2025-05-13 19:04:30.969108 | Job console starting
2025-05-13 19:04:30.983807 | Updating git repos
2025-05-13 19:04:31.033078 | Cloning repos into workspace
2025-05-13 19:04:31.199831 | Restoring repo states
2025-05-13 19:04:31.232838 | Merging changes
2025-05-13 19:04:31.232948 | Checking out repos
2025-05-13 19:04:31.461539 | Preparing playbooks
2025-05-13 19:04:32.112424 | Running Ansible setup
2025-05-13 19:04:37.616864 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-13 19:04:38.399493 |
2025-05-13 19:04:38.399697 | PLAY [Base pre]
2025-05-13 19:04:38.416898 |
2025-05-13 19:04:38.417054 | TASK [Setup log path fact]
2025-05-13 19:04:38.449184 | orchestrator | ok
2025-05-13 19:04:38.467212 |
2025-05-13 19:04:38.467353 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-13 19:04:38.509578 | orchestrator | ok
2025-05-13 19:04:38.522112 |
2025-05-13 19:04:38.522226 | TASK [emit-job-header : Print job information]
2025-05-13 19:04:38.580814 | # Job Information
2025-05-13 19:04:38.581106 | Ansible Version: 2.16.14
2025-05-13 19:04:38.581166 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-13 19:04:38.581225 | Pipeline: post
2025-05-13 19:04:38.581266 | Executor: 521e9411259a
2025-05-13 19:04:38.581302 | Triggered by: https://github.com/osism/testbed/commit/7a2982f0ad85e5ec91926fc144b5bb2890e2c9a1
2025-05-13 19:04:38.581340 | Event ID: f8b8c79e-302a-11f0-908b-2b7949b6a9c0
2025-05-13 19:04:38.591794 |
2025-05-13 19:04:38.591931 | LOOP [emit-job-header : Print node information]
2025-05-13 19:04:38.725937 | orchestrator | ok:
2025-05-13 19:04:38.726236 | orchestrator | # Node Information
2025-05-13 19:04:38.726293 | orchestrator | Inventory Hostname: orchestrator
2025-05-13 19:04:38.726334 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-13 19:04:38.726369 | orchestrator | Username: zuul-testbed03
2025-05-13 19:04:38.726403 | orchestrator | Distro: Debian 12.10
2025-05-13 19:04:38.726441 | orchestrator | Provider: static-testbed
2025-05-13 19:04:38.726475 | orchestrator | Region:
2025-05-13 19:04:38.726509 | orchestrator | Label: testbed-orchestrator
2025-05-13 19:04:38.726541 | orchestrator | Product Name: OpenStack Nova
2025-05-13 19:04:38.726573 | orchestrator | Interface IP: 81.163.193.140
2025-05-13 19:04:38.755231 |
2025-05-13 19:04:38.755446 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-13 19:04:39.264783 | orchestrator -> localhost | changed
2025-05-13 19:04:39.282531 |
2025-05-13 19:04:39.282732 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-13 19:04:40.370250 | orchestrator -> localhost | changed
2025-05-13 19:04:40.397156 |
2025-05-13 19:04:40.397359 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-13 19:04:40.697708 | orchestrator -> localhost | ok
2025-05-13 19:04:40.709055 |
2025-05-13 19:04:40.709228 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-13 19:04:40.743835 | orchestrator | ok
2025-05-13 19:04:40.763144 | orchestrator | included: /var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-13 19:04:40.771400 |
2025-05-13 19:04:40.771498 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-13 19:04:42.251279 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-13 19:04:42.251854 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/221dbb57df2c4a04a6bf0721f15dc81e_id_rsa
2025-05-13 19:04:42.251971 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/221dbb57df2c4a04a6bf0721f15dc81e_id_rsa.pub
2025-05-13 19:04:42.252046 | orchestrator -> localhost | The key fingerprint is:
2025-05-13 19:04:42.252114 | orchestrator -> localhost | SHA256:KNxTYhm4Ap2vz9SImBagewYncc38h5CXJBgJyvg8dns zuul-build-sshkey
2025-05-13 19:04:42.252180 | orchestrator -> localhost | The key's randomart image is:
2025-05-13 19:04:42.252272 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-13 19:04:42.252338 | orchestrator -> localhost | |.o.O.+o.         |
2025-05-13 19:04:42.252402 | orchestrator -> localhost | |B * B.oo         |
2025-05-13 19:04:42.252460 | orchestrator -> localhost | |== . =+..        |
2025-05-13 19:04:42.252516 | orchestrator -> localhost | |++o.ooo+.        |
2025-05-13 19:04:42.252571 | orchestrator -> localhost | | B*++o+.S        |
2025-05-13 19:04:42.252679 | orchestrator -> localhost | |+o=ooo..         |
2025-05-13 19:04:42.252743 | orchestrator -> localhost | |.o +. E          |
2025-05-13 19:04:42.252800 | orchestrator -> localhost | | o.              |
2025-05-13 19:04:42.252860 | orchestrator -> localhost | |                 |
2025-05-13 19:04:42.252918 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-13 19:04:42.253050 | orchestrator -> localhost | ok: Runtime: 0:00:00.948964
2025-05-13 19:04:42.272974 |
2025-05-13 19:04:42.273159 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-13 19:04:42.312542 | orchestrator | ok
2025-05-13 19:04:42.326129 | orchestrator | included: /var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-13 19:04:42.335823 |
2025-05-13 19:04:42.335929 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-13 19:04:42.362964 | orchestrator | skipping: Conditional result was False
2025-05-13 19:04:42.380934 |
2025-05-13 19:04:42.381085 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-13 19:04:42.998109 | orchestrator | changed
2025-05-13 19:04:43.007186 |
2025-05-13 19:04:43.007341 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-13 19:04:43.310636 | orchestrator | ok
2025-05-13 19:04:43.319871 |
2025-05-13 19:04:43.320001 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-13 19:04:43.959841 | orchestrator | ok
2025-05-13 19:04:43.969087 |
2025-05-13 19:04:43.969233 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-13 19:04:44.428750 | orchestrator | ok
2025-05-13 19:04:44.437488 |
2025-05-13 19:04:44.437615 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-13 19:04:44.462586 | orchestrator | skipping: Conditional result was False
2025-05-13 19:04:44.470146 |
2025-05-13 19:04:44.470263 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-13 19:04:44.953086 | orchestrator -> localhost | changed
2025-05-13 19:04:44.976765 |
2025-05-13 19:04:44.976915 | TASK [add-build-sshkey : Add back temp key]
2025-05-13 19:04:45.357873 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/221dbb57df2c4a04a6bf0721f15dc81e_id_rsa (zuul-build-sshkey)
2025-05-13 19:04:45.358205 | orchestrator -> localhost | ok: Runtime: 0:00:00.018638
2025-05-13 19:04:45.366513 |
2025-05-13 19:04:45.366651 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-13 19:04:45.801513 | orchestrator | ok
2025-05-13 19:04:45.810547 |
2025-05-13 19:04:45.810736 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-13 19:04:45.847860 | orchestrator | skipping: Conditional result was False
2025-05-13 19:04:45.917323 |
2025-05-13 19:04:45.917501 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-13 19:04:46.332612 | orchestrator | ok
2025-05-13 19:04:46.347617 |
2025-05-13 19:04:46.347795 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-13 19:04:46.406945 | orchestrator | ok
2025-05-13 19:04:46.417913 |
2025-05-13 19:04:46.418056 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-13 19:04:46.738655 | orchestrator -> localhost | ok
2025-05-13 19:04:46.754284 |
2025-05-13 19:04:46.754452 | TASK [validate-host : Collect information about the host]
2025-05-13 19:04:48.015956 | orchestrator | ok
2025-05-13 19:04:48.033889 |
2025-05-13 19:04:48.034043 | TASK [validate-host : Sanitize hostname]
2025-05-13 19:04:48.102949 | orchestrator | ok
2025-05-13 19:04:48.111785 |
2025-05-13 19:04:48.111975 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-13 19:04:48.718897 | orchestrator -> localhost | changed
2025-05-13 19:04:48.733668 |
2025-05-13 19:04:48.733820 | TASK [validate-host : Collect information about zuul worker]
2025-05-13 19:04:49.204806 | orchestrator | ok
2025-05-13 19:04:49.213086 |
2025-05-13 19:04:49.213240 | TASK [validate-host : Write out all zuul information for each host]
2025-05-13 19:04:49.805281 | orchestrator -> localhost | changed
2025-05-13 19:04:49.826392 |
2025-05-13 19:04:49.826531 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-13 19:04:50.127004 | orchestrator | ok
2025-05-13 19:04:50.136316 |
2025-05-13 19:04:50.136446 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-13 19:05:09.014104 | orchestrator | changed:
2025-05-13 19:05:09.014435 | orchestrator | .d..t...... src/
2025-05-13 19:05:09.014480 | orchestrator | .d..t...... src/github.com/
2025-05-13 19:05:09.014512 | orchestrator | .d..t...... src/github.com/osism/
2025-05-13 19:05:09.014540 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-13 19:05:09.014566 | orchestrator | RedHat.yml
2025-05-13 19:05:09.025994 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-13 19:05:09.026012 | orchestrator | RedHat.yml
2025-05-13 19:05:09.026064 | orchestrator | [... remaining rsync output and the start of the OpenTofu init run are missing from this log ...]
... | orchestrator | ... STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-13 19:05:21.559743 | orchestrator | 19:05:21.559 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-13 19:05:21.646329 | orchestrator | 19:05:21.646 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-13 19:05:23.090317 | orchestrator | 19:05:23.090 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-13 19:05:24.307647 | orchestrator | 19:05:24.307 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-13 19:05:25.269072 | orchestrator | 19:05:25.268 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-13 19:05:26.300623 | orchestrator | 19:05:26.300 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-13 19:05:27.786799 | orchestrator | 19:05:27.786 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-13 19:05:28.899663 | orchestrator | 19:05:28.899 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-13 19:05:28.899767 | orchestrator | 19:05:28.899 STDOUT terraform: Providers are signed by their developers.
2025-05-13 19:05:28.899780 | orchestrator | 19:05:28.899 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-13 19:05:28.899785 | orchestrator | 19:05:28.899 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-13 19:05:28.899942 | orchestrator | 19:05:28.899 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-13 19:05:28.900089 | orchestrator | 19:05:28.899 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-13 19:05:28.900224 | orchestrator | 19:05:28.900 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-13 19:05:28.900273 | orchestrator | 19:05:28.900 STDOUT terraform: you run "tofu init" in the future.
2025-05-13 19:05:28.900395 | orchestrator | 19:05:28.900 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-13 19:05:28.900549 | orchestrator | 19:05:28.900 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-13 19:05:28.900706 | orchestrator | 19:05:28.900 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-13 19:05:28.900805 | orchestrator | 19:05:28.900 STDOUT terraform: should now work.
2025-05-13 19:05:28.900949 | orchestrator | 19:05:28.900 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-13 19:05:28.901079 | orchestrator | 19:05:28.900 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-13 19:05:28.901188 | orchestrator | 19:05:28.901 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-13 19:05:29.141859 | orchestrator | 19:05:29.141 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-05-13 19:05:29.349504 | orchestrator | 19:05:29.349 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-13 19:05:29.349611 | orchestrator | 19:05:29.349 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-13 19:05:29.349641 | orchestrator | 19:05:29.349 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-13 19:05:29.349654 | orchestrator | 19:05:29.349 STDOUT terraform: for this configuration.
2025-05-13 19:05:29.602231 | orchestrator | 19:05:29.601 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
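The provider constraints behind this init run can be read off the "Finding ..." lines above: openstack >= 1.53.0 (resolved to v3.0.0), local >= 2.2.0 (resolved to v2.5.2), and null unconstrained (latest, v3.2.4). A minimal sketch of a matching required_providers block; this is a reconstruction from the log, not the actual testbed configuration:

```hcl
# Hedged reconstruction: constraints taken from the "tofu init" output above.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.0.0 in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.2
    }
    null = {
      source = "hashicorp/null" # unconstrained: "latest version", v3.2.4 here
    }
  }
}
```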
2025-05-13 19:05:29.729899 | orchestrator | 19:05:29.729 STDOUT terraform: ci.auto.tfvars
2025-05-13 19:05:29.735719 | orchestrator | 19:05:29.735 STDOUT terraform: default_custom.tf
2025-05-13 19:05:29.931613 | orchestrator | 19:05:29.931 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-05-13 19:05:30.926686 | orchestrator | 19:05:30.926 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-13 19:05:31.474950 | orchestrator | 19:05:31.474 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-13 19:05:31.660394 | orchestrator | 19:05:31.659 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-13 19:05:31.660498 | orchestrator | 19:05:31.660 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-13 19:05:31.660526 | orchestrator | 19:05:31.660 STDOUT terraform:   + create
2025-05-13 19:05:31.660540 | orchestrator | 19:05:31.660 STDOUT terraform:  <= read (data resources)
2025-05-13 19:05:31.660552 | orchestrator | 19:05:31.660 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-13 19:05:31.660603 | orchestrator | 19:05:31.660 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-13 19:05:31.660747 | orchestrator | 19:05:31.660 STDOUT terraform:   # (config refers to values not yet known)
2025-05-13 19:05:31.660853 | orchestrator | 19:05:31.660 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-13 19:05:31.660934 | orchestrator | 19:05:31.660 STDOUT terraform:       + checksum = (known after apply)
2025-05-13 19:05:31.660985 | orchestrator | 19:05:31.660 STDOUT terraform:       + created_at = (known after apply)
2025-05-13 19:05:31.661073 | orchestrator | 19:05:31.660 STDOUT terraform:       + file = (known after apply)
2025-05-13 19:05:31.661166 | orchestrator | 19:05:31.661 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.661218 | orchestrator | 19:05:31.661 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.661317 | orchestrator | 19:05:31.661 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-13 19:05:31.661421 | orchestrator | 19:05:31.661 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-13 19:05:31.661525 | orchestrator | 19:05:31.661 STDOUT terraform:       + most_recent = true
2025-05-13 19:05:31.661622 | orchestrator | 19:05:31.661 STDOUT terraform:       + name = (known after apply)
2025-05-13 19:05:31.661755 | orchestrator | 19:05:31.661 STDOUT terraform:       + protected = (known after apply)
2025-05-13 19:05:31.661856 | orchestrator | 19:05:31.661 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.662062 | orchestrator | 19:05:31.661 STDOUT terraform:       + schema = (known after apply)
2025-05-13 19:05:31.662208 | orchestrator | 19:05:31.662 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-13 19:05:31.662434 | orchestrator | 19:05:31.662 STDOUT terraform:       + tags = (known after apply)
2025-05-13 19:05:31.662453 | orchestrator | 19:05:31.662 STDOUT terraform:       + updated_at = (known after apply)
2025-05-13 19:05:31.662469 | orchestrator | 19:05:31.662 STDOUT terraform:     }
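Both image lookups in this plan are read during apply because the image name is itself computed, so every attribute prints as "(known after apply)" except most_recent. A minimal sketch of such a data block, with the name expression assumed:

```hcl
# Hedged sketch: only most_recent is visible in the plan output;
# the name expression is an assumption (hypothetical variable).
data "openstack_images_image_v2" "image" {
  name        = var.image # computed at apply time in this plan
  most_recent = true
}
```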
2025-05-13 19:05:31.662649 | orchestrator | 19:05:31.662 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-13 19:05:31.662667 | orchestrator | 19:05:31.662 STDOUT terraform:   # (config refers to values not yet known)
2025-05-13 19:05:31.662779 | orchestrator | 19:05:31.662 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-13 19:05:31.662855 | orchestrator | 19:05:31.662 STDOUT terraform:       + checksum = (known after apply)
2025-05-13 19:05:31.662931 | orchestrator | 19:05:31.662 STDOUT terraform:       + created_at = (known after apply)
2025-05-13 19:05:31.662986 | orchestrator | 19:05:31.662 STDOUT terraform:       + file = (known after apply)
2025-05-13 19:05:31.663062 | orchestrator | 19:05:31.662 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.663157 | orchestrator | 19:05:31.663 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.663203 | orchestrator | 19:05:31.663 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-13 19:05:31.663274 | orchestrator | 19:05:31.663 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-13 19:05:31.663349 | orchestrator | 19:05:31.663 STDOUT terraform:       + most_recent = true
2025-05-13 19:05:31.663396 | orchestrator | 19:05:31.663 STDOUT terraform:       + name = (known after apply)
2025-05-13 19:05:31.663467 | orchestrator | 19:05:31.663 STDOUT terraform:       + protected = (known after apply)
2025-05-13 19:05:31.663542 | orchestrator | 19:05:31.663 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.663640 | orchestrator | 19:05:31.663 STDOUT terraform:       + schema = (known after apply)
2025-05-13 19:05:31.663737 | orchestrator | 19:05:31.663 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-13 19:05:31.663778 | orchestrator | 19:05:31.663 STDOUT terraform:       + tags = (known after apply)
2025-05-13 19:05:31.663868 | orchestrator | 19:05:31.663 STDOUT terraform:       + updated_at = (known after apply)
2025-05-13 19:05:31.663910 | orchestrator | 19:05:31.663 STDOUT terraform:     }
2025-05-13 19:05:31.663984 | orchestrator | 19:05:31.663 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-13 19:05:31.664059 | orchestrator | 19:05:31.663 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-13 19:05:31.664149 | orchestrator | 19:05:31.664 STDOUT terraform:       + content = (known after apply)
2025-05-13 19:05:31.664339 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-13 19:05:31.664371 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-13 19:05:31.664434 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-13 19:05:31.664522 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-13 19:05:31.664613 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-13 19:05:31.664831 | orchestrator | 19:05:31.664 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-13 19:05:31.664903 | orchestrator | 19:05:31.664 STDOUT terraform:       + directory_permission = "0777"
2025-05-13 19:05:31.664960 | orchestrator | 19:05:31.664 STDOUT terraform:       + file_permission = "0644"
2025-05-13 19:05:31.665049 | orchestrator | 19:05:31.664 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-05-13 19:05:31.665106 | orchestrator | 19:05:31.665 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.665137 | orchestrator | 19:05:31.665 STDOUT terraform:     }
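local_file.MANAGER_ADDRESS materializes the manager's address in the working directory, suffixed with the workspace name. A sketch consistent with the plan entry above; the content expression is assumed:

```hcl
# Hedged sketch: filename and file_permission match the plan;
# the content reference is a hypothetical placeholder.
resource "local_file" "MANAGER_ADDRESS" {
  content         = local.manager_address # "(known after apply)" in the plan
  filename        = ".MANAGER_ADDRESS.${terraform.workspace}" # ".MANAGER_ADDRESS.ci" here
  file_permission = "0644"
}
```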
2025-05-13 19:05:31.665191 | orchestrator | 19:05:31.665 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-13 19:05:31.665263 | orchestrator | 19:05:31.665 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-13 19:05:31.665316 | orchestrator | 19:05:31.665 STDOUT terraform:       + content = (known after apply)
2025-05-13 19:05:31.665389 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-13 19:05:31.665462 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-13 19:05:31.665550 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-13 19:05:31.665607 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-13 19:05:31.665681 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-13 19:05:31.665774 | orchestrator | 19:05:31.665 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-13 19:05:31.665824 | orchestrator | 19:05:31.665 STDOUT terraform:       + directory_permission = "0777"
2025-05-13 19:05:31.665879 | orchestrator | 19:05:31.665 STDOUT terraform:       + file_permission = "0644"
2025-05-13 19:05:31.665943 | orchestrator | 19:05:31.665 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-05-13 19:05:31.666046 | orchestrator | 19:05:31.665 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.666063 | orchestrator | 19:05:31.666 STDOUT terraform:     }
2025-05-13 19:05:31.666117 | orchestrator | 19:05:31.666 STDOUT terraform:   # local_file.inventory will be created
2025-05-13 19:05:31.666168 | orchestrator | 19:05:31.666 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-13 19:05:31.666266 | orchestrator | 19:05:31.666 STDOUT terraform:       + content = (known after apply)
2025-05-13 19:05:31.666312 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-13 19:05:31.666385 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-13 19:05:31.666457 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-13 19:05:31.666537 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-13 19:05:31.666602 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-13 19:05:31.666676 | orchestrator | 19:05:31.666 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-13 19:05:31.666745 | orchestrator | 19:05:31.666 STDOUT terraform:       + directory_permission = "0777"
2025-05-13 19:05:31.666792 | orchestrator | 19:05:31.666 STDOUT terraform:       + file_permission = "0644"
2025-05-13 19:05:31.666854 | orchestrator | 19:05:31.666 STDOUT terraform:       + filename = "inventory.ci"
2025-05-13 19:05:31.666927 | orchestrator | 19:05:31.666 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.666942 | orchestrator | 19:05:31.666 STDOUT terraform:     }
2025-05-13 19:05:31.667007 | orchestrator | 19:05:31.666 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-13 19:05:31.667066 | orchestrator | 19:05:31.667 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-13 19:05:31.667129 | orchestrator | 19:05:31.667 STDOUT terraform:       + content = (sensitive value)
2025-05-13 19:05:31.667208 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-13 19:05:31.667280 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-13 19:05:31.667372 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-13 19:05:31.667422 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-13 19:05:31.667489 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-13 19:05:31.667559 | orchestrator | 19:05:31.667 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-13 19:05:31.667602 | orchestrator | 19:05:31.667 STDOUT terraform:       + directory_permission = "0700"
2025-05-13 19:05:31.667649 | orchestrator | 19:05:31.667 STDOUT terraform:       + file_permission = "0600"
2025-05-13 19:05:31.667738 | orchestrator | 19:05:31.667 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-05-13 19:05:31.667793 | orchestrator | 19:05:31.667 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.667807 | orchestrator | 19:05:31.667 STDOUT terraform:     }
2025-05-13 19:05:31.667872 | orchestrator | 19:05:31.667 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-13 19:05:31.667933 | orchestrator | 19:05:31.667 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-13 19:05:31.667975 | orchestrator | 19:05:31.667 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.668004 | orchestrator | 19:05:31.667 STDOUT terraform:     }
2025-05-13 19:05:31.668104 | orchestrator | 19:05:31.667 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-13 19:05:31.668198 | orchestrator | 19:05:31.668 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-13 19:05:31.668261 | orchestrator | 19:05:31.668 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.668303 | orchestrator | 19:05:31.668 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.668367 | orchestrator | 19:05:31.668 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.668428 | orchestrator | 19:05:31.668 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.668491 | orchestrator | 19:05:31.668 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.668570 | orchestrator | 19:05:31.668 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-05-13 19:05:31.668641 | orchestrator | 19:05:31.668 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.668673 | orchestrator | 19:05:31.668 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.668776 | orchestrator | 19:05:31.668 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.668792 | orchestrator | 19:05:31.668 STDOUT terraform:     }
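The manager boots from a dedicated 80 GB base volume built from the image lookup. A sketch of a resource that would produce the plan entry above; the count and image wiring are assumptions:

```hcl
# Hedged sketch: name, size, volume_type and availability_zone match the plan.
resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
  count             = 1 # the plan shows a single index [0]
  name              = "testbed-volume-manager-base"
  image_id          = data.openstack_images_image_v2.image.id # "(known after apply)"
  size              = 80
  volume_type       = "ssd"
  availability_zone = "nova"
}
```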
2025-05-13 19:05:31.668893 | orchestrator | 19:05:31.668 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-13 19:05:31.668987 | orchestrator | 19:05:31.668 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.669064 | orchestrator | 19:05:31.668 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.669089 | orchestrator | 19:05:31.669 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.669148 | orchestrator | 19:05:31.669 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.669210 | orchestrator | 19:05:31.669 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.669274 | orchestrator | 19:05:31.669 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.669355 | orchestrator | 19:05:31.669 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-05-13 19:05:31.669418 | orchestrator | 19:05:31.669 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.669459 | orchestrator | 19:05:31.669 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.669501 | orchestrator | 19:05:31.669 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.669528 | orchestrator | 19:05:31.669 STDOUT terraform:     }
2025-05-13 19:05:31.669624 | orchestrator | 19:05:31.669 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-13 19:05:31.669814 | orchestrator | 19:05:31.669 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.669832 | orchestrator | 19:05:31.669 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.669846 | orchestrator | 19:05:31.669 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.669899 | orchestrator | 19:05:31.669 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.669952 | orchestrator | 19:05:31.669 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.670014 | orchestrator | 19:05:31.669 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.673157 | orchestrator | 19:05:31.670 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-05-13 19:05:31.673219 | orchestrator | 19:05:31.673 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.673305 | orchestrator | 19:05:31.673 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.673343 | orchestrator | 19:05:31.673 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.673355 | orchestrator | 19:05:31.673 STDOUT terraform:     }
2025-05-13 19:05:31.673581 | orchestrator | 19:05:31.673 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-13 19:05:31.673648 | orchestrator | 19:05:31.673 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.673724 | orchestrator | 19:05:31.673 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.673764 | orchestrator | 19:05:31.673 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.673828 | orchestrator | 19:05:31.673 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.673891 | orchestrator | 19:05:31.673 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.673953 | orchestrator | 19:05:31.673 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.674057 | orchestrator | 19:05:31.673 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-05-13 19:05:31.674121 | orchestrator | 19:05:31.674 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.674163 | orchestrator | 19:05:31.674 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.674206 | orchestrator | 19:05:31.674 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.674236 | orchestrator | 19:05:31.674 STDOUT terraform:     }
2025-05-13 19:05:31.674330 | orchestrator | 19:05:31.674 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-13 19:05:31.674422 | orchestrator | 19:05:31.674 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.674487 | orchestrator | 19:05:31.674 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.674529 | orchestrator | 19:05:31.674 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.674591 | orchestrator | 19:05:31.674 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.674653 | orchestrator | 19:05:31.674 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.674763 | orchestrator | 19:05:31.674 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.674844 | orchestrator | 19:05:31.674 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-05-13 19:05:31.674908 | orchestrator | 19:05:31.674 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.674951 | orchestrator | 19:05:31.674 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.674994 | orchestrator | 19:05:31.674 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.675023 | orchestrator | 19:05:31.674 STDOUT terraform:     }
2025-05-13 19:05:31.675120 | orchestrator | 19:05:31.675 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-13 19:05:31.675201 | orchestrator | 19:05:31.675 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.675255 | orchestrator | 19:05:31.675 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.675291 | orchestrator | 19:05:31.675 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.675344 | orchestrator | 19:05:31.675 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.675397 | orchestrator | 19:05:31.675 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.675452 | orchestrator | 19:05:31.675 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.675519 | orchestrator | 19:05:31.675 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-05-13 19:05:31.675572 | orchestrator | 19:05:31.675 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.675608 | orchestrator | 19:05:31.675 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.675649 | orchestrator | 19:05:31.675 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.675663 | orchestrator | 19:05:31.675 STDOUT terraform:     }
2025-05-13 19:05:31.675772 | orchestrator | 19:05:31.675 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-13 19:05:31.675837 | orchestrator | 19:05:31.675 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-13 19:05:31.675889 | orchestrator | 19:05:31.675 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.675932 | orchestrator | 19:05:31.675 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.675979 | orchestrator | 19:05:31.675 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.676031 | orchestrator | 19:05:31.675 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.676084 | orchestrator | 19:05:31.676 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.676152 | orchestrator | 19:05:31.676 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-05-13 19:05:31.676207 | orchestrator | 19:05:31.676 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.676242 | orchestrator | 19:05:31.676 STDOUT terraform:       + size = 80
2025-05-13 19:05:31.676278 | orchestrator | 19:05:31.676 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.676302 | orchestrator | 19:05:31.676 STDOUT terraform:     }
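The six node base volumes differ only in the count index. One way to express them, with the node count assumed as a variable:

```hcl
# Hedged sketch: six instances ([0]..[5]) in this plan.
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = var.number_of_nodes # 6 in this run (hypothetical variable)
  name              = "testbed-volume-${count.index}-node-base"
  image_id          = data.openstack_images_image_v2.image_node.id
  size              = 80
  volume_type       = "ssd"
  availability_zone = "nova"
}
```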
2025-05-13 19:05:31.676378 | orchestrator | 19:05:31.676 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-13 19:05:31.676456 | orchestrator | 19:05:31.676 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.676502 | orchestrator | 19:05:31.676 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.676541 | orchestrator | 19:05:31.676 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.676595 | orchestrator | 19:05:31.676 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.676648 | orchestrator | 19:05:31.676 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.676727 | orchestrator | 19:05:31.676 STDOUT terraform:       + name = "testbed-volume-0-node-3"
2025-05-13 19:05:31.676779 | orchestrator | 19:05:31.676 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.676811 | orchestrator | 19:05:31.676 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.676843 | orchestrator | 19:05:31.676 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.676853 | orchestrator | 19:05:31.676 STDOUT terraform:     }
2025-05-13 19:05:31.676937 | orchestrator | 19:05:31.676 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-13 19:05:31.677010 | orchestrator | 19:05:31.676 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.677062 | orchestrator | 19:05:31.677 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.677097 | orchestrator | 19:05:31.677 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.677151 | orchestrator | 19:05:31.677 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.677204 | orchestrator | 19:05:31.677 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.677268 | orchestrator | 19:05:31.677 STDOUT terraform:       + name = "testbed-volume-1-node-4"
2025-05-13 19:05:31.677322 | orchestrator | 19:05:31.677 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.677360 | orchestrator | 19:05:31.677 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.677400 | orchestrator | 19:05:31.677 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.677410 | orchestrator | 19:05:31.677 STDOUT terraform:     }
2025-05-13 19:05:31.677488 | orchestrator | 19:05:31.677 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-13 19:05:31.677564 | orchestrator | 19:05:31.677 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.677618 | orchestrator | 19:05:31.677 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.677653 | orchestrator | 19:05:31.677 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.677744 | orchestrator | 19:05:31.677 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.677779 | orchestrator | 19:05:31.677 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.677838 | orchestrator | 19:05:31.677 STDOUT terraform:       + name = "testbed-volume-2-node-5"
2025-05-13 19:05:31.677892 | orchestrator | 19:05:31.677 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.677927 | orchestrator | 19:05:31.677 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.677962 | orchestrator | 19:05:31.677 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.677971 | orchestrator | 19:05:31.677 STDOUT terraform:     }
2025-05-13 19:05:31.678082 | orchestrator | 19:05:31.677 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-13 19:05:31.678155 | orchestrator | 19:05:31.678 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.678207 | orchestrator | 19:05:31.678 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.678243 | orchestrator | 19:05:31.678 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.678297 | orchestrator | 19:05:31.678 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.678351 | orchestrator | 19:05:31.678 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.678414 | orchestrator | 19:05:31.678 STDOUT terraform:       + name = "testbed-volume-3-node-3"
2025-05-13 19:05:31.678469 | orchestrator | 19:05:31.678 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.678507 | orchestrator | 19:05:31.678 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.678543 | orchestrator | 19:05:31.678 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.678565 | orchestrator | 19:05:31.678 STDOUT terraform:     }
2025-05-13 19:05:31.678643 | orchestrator | 19:05:31.678 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-13 19:05:31.678728 | orchestrator | 19:05:31.678 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.678782 | orchestrator | 19:05:31.678 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.678818 | orchestrator | 19:05:31.678 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.678872 | orchestrator | 19:05:31.678 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.678924 | orchestrator | 19:05:31.678 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.678989 | orchestrator | 19:05:31.678 STDOUT terraform:       + name = "testbed-volume-4-node-4"
2025-05-13 19:05:31.679042 | orchestrator | 19:05:31.678 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.679082 | orchestrator | 19:05:31.679 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.679113 | orchestrator | 19:05:31.679 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.679136 | orchestrator | 19:05:31.679 STDOUT terraform:     }
2025-05-13 19:05:31.679211 | orchestrator | 19:05:31.679 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-13 19:05:31.679285 | orchestrator | 19:05:31.679 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.679350 | orchestrator | 19:05:31.679 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.679387 | orchestrator | 19:05:31.679 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.679440 | orchestrator | 19:05:31.679 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.679494 | orchestrator | 19:05:31.679 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.679558 | orchestrator | 19:05:31.679 STDOUT terraform:       + name = "testbed-volume-5-node-5"
2025-05-13 19:05:31.679612 | orchestrator | 19:05:31.679 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.679646 | orchestrator | 19:05:31.679 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.679682 | orchestrator | 19:05:31.679 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.679875 | orchestrator | 19:05:31.679 STDOUT terraform:     }
2025-05-13 19:05:31.679981 | orchestrator | 19:05:31.679 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-13 19:05:31.679999 | orchestrator | 19:05:31.679 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.680040 | orchestrator | 19:05:31.679 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.680052 | orchestrator | 19:05:31.679 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.680067 | orchestrator | 19:05:31.679 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.680078 | orchestrator | 19:05:31.680 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.680141 | orchestrator | 19:05:31.680 STDOUT terraform:       + name = "testbed-volume-6-node-3"
2025-05-13 19:05:31.680188 | orchestrator | 19:05:31.680 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.680215 | orchestrator | 19:05:31.680 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.680252 | orchestrator | 19:05:31.680 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.680268 | orchestrator | 19:05:31.680 STDOUT terraform:     }
2025-05-13 19:05:31.680339 | orchestrator | 19:05:31.680 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-13 19:05:31.680409 | orchestrator | 19:05:31.680 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.680458 | orchestrator | 19:05:31.680 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.680485 | orchestrator | 19:05:31.680 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.680539 | orchestrator | 19:05:31.680 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.680588 | orchestrator | 19:05:31.680 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.680652 | orchestrator | 19:05:31.680 STDOUT terraform:       + name = "testbed-volume-7-node-4"
2025-05-13 19:05:31.680722 | orchestrator | 19:05:31.680 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.680739 | orchestrator | 19:05:31.680 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.680775 | orchestrator | 19:05:31.680 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.680791 | orchestrator | 19:05:31.680 STDOUT terraform:     }
2025-05-13 19:05:31.680863 | orchestrator | 19:05:31.680 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-13 19:05:31.680947 | orchestrator | 19:05:31.680 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-13 19:05:31.680964 | orchestrator | 19:05:31.680 STDOUT terraform:       + attachment = (known after apply)
2025-05-13 19:05:31.681009 | orchestrator | 19:05:31.680 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.681116 | orchestrator | 19:05:31.680 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.681132 | orchestrator | 19:05:31.681 STDOUT terraform:       + metadata = (known after apply)
2025-05-13 19:05:31.681184 | orchestrator | 19:05:31.681 STDOUT terraform:       + name = "testbed-volume-8-node-5"
2025-05-13 19:05:31.681249 | orchestrator | 19:05:31.681 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.681300 | orchestrator | 19:05:31.681 STDOUT terraform:       + size = 20
2025-05-13 19:05:31.681354 | orchestrator | 19:05:31.681 STDOUT terraform:       + volume_type = "ssd"
2025-05-13 19:05:31.681381 | orchestrator | 19:05:31.681 STDOUT terraform:     }
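The nine 20 GB data volumes cycle across testbed-node-3, -4 and -5: volume i lands on node 3 + (i mod 3). A hedged sketch of one naming formula that reproduces the plan; the actual configuration may compute this differently:

```hcl
# Hedged sketch: nine volumes ([0]..[8]); the name pattern is read off the plan.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  size              = 20
  volume_type       = "ssd"
  availability_zone = "nova"
}
```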
2025-05-13 19:05:31.681447 | orchestrator | 19:05:31.681 STDOUT terraform:   # openstack_compute_instance_v2.manager_server will be created
2025-05-13 19:05:31.681517 | orchestrator | 19:05:31.681 STDOUT terraform:   + resource "openstack_compute_instance_v2" "manager_server" {
2025-05-13 19:05:31.681573 | orchestrator | 19:05:31.681 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-05-13 19:05:31.681629 | orchestrator | 19:05:31.681 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-05-13 19:05:31.681686 | orchestrator | 19:05:31.681 STDOUT terraform:       + all_metadata = (known after apply)
2025-05-13 19:05:31.681756 | orchestrator | 19:05:31.681 STDOUT terraform:       + all_tags = (known after apply)
2025-05-13 19:05:31.681809 | orchestrator | 19:05:31.681 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.681825 | orchestrator | 19:05:31.681 STDOUT terraform:       + config_drive = true
2025-05-13 19:05:31.681881 | orchestrator | 19:05:31.681 STDOUT terraform:       + created = (known after apply)
2025-05-13 19:05:31.681938 | orchestrator | 19:05:31.681 STDOUT terraform:       + flavor_id = (known after apply)
2025-05-13 19:05:31.681990 | orchestrator | 19:05:31.681 STDOUT terraform:       + flavor_name = "OSISM-4V-16"
2025-05-13 19:05:31.682034 | orchestrator | 19:05:31.681 STDOUT terraform:       + force_delete = false
2025-05-13 19:05:31.682102 | orchestrator | 19:05:31.682 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.682162 | orchestrator | 19:05:31.682 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.682220 | orchestrator | 19:05:31.682 STDOUT terraform:       + image_name = (known after apply)
2025-05-13 19:05:31.682260 | orchestrator | 19:05:31.682 STDOUT terraform:       + key_pair = "testbed"
2025-05-13 19:05:31.682299 | orchestrator | 19:05:31.682 STDOUT terraform:       + name = "testbed-manager"
2025-05-13 19:05:31.682339 | orchestrator | 19:05:31.682 STDOUT terraform:       + power_state = "active"
2025-05-13 19:05:31.682396 | orchestrator | 19:05:31.682 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.682453 | orchestrator | 19:05:31.682 STDOUT terraform:       + security_groups = (known after apply)
2025-05-13 19:05:31.682470 | orchestrator | 19:05:31.682 STDOUT terraform:       + stop_before_destroy = false
2025-05-13 19:05:31.682542 | orchestrator | 19:05:31.682 STDOUT terraform:       + updated = (known after apply)
2025-05-13 19:05:31.682599 | orchestrator | 19:05:31.682 STDOUT terraform:       + user_data = (known after apply)
2025-05-13 19:05:31.682615 | orchestrator | 19:05:31.682 STDOUT terraform:       + block_device {
2025-05-13 19:05:31.682654 | orchestrator | 19:05:31.682 STDOUT terraform:           + boot_index = 0
2025-05-13 19:05:31.682722 | orchestrator | 19:05:31.682 STDOUT terraform:           + delete_on_termination = false
2025-05-13 19:05:31.682739 | orchestrator | 19:05:31.682 STDOUT terraform:           + destination_type = "volume"
2025-05-13 19:05:31.682793 | orchestrator | 19:05:31.682 STDOUT terraform:           + multiattach = false
2025-05-13 19:05:31.682842 | orchestrator | 19:05:31.682 STDOUT terraform:           + source_type = "volume"
2025-05-13 19:05:31.682907 | orchestrator | 19:05:31.682 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.682924 | orchestrator | 19:05:31.682 STDOUT terraform:         }
2025-05-13 19:05:31.682946 | orchestrator | 19:05:31.682 STDOUT terraform:       + network {
2025-05-13 19:05:31.682961 | orchestrator | 19:05:31.682 STDOUT terraform:           + access_network = false
2025-05-13 19:05:31.683019 | orchestrator | 19:05:31.682 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-05-13 19:05:31.683068 | orchestrator | 19:05:31.683 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-05-13 19:05:31.683119 | orchestrator | 19:05:31.683 STDOUT terraform:           + mac = (known after apply)
2025-05-13 19:05:31.683169 | orchestrator | 19:05:31.683 STDOUT terraform:           + name = (known after apply)
2025-05-13 19:05:31.683223 | orchestrator | 19:05:31.683 STDOUT terraform:           + port = (known after apply)
2025-05-13 19:05:31.683280 | orchestrator | 19:05:31.683 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.683297 | orchestrator | 19:05:31.683 STDOUT terraform:         }
2025-05-13 19:05:31.683336 | orchestrator | 19:05:31.683 STDOUT terraform:     }
2025-05-13 19:05:31.683414 | orchestrator | 19:05:31.683 STDOUT terraform:   # openstack_compute_instance_v2.node_server[0] will be created
2025-05-13 19:05:31.683482 | orchestrator | 19:05:31.683 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-05-13 19:05:31.683546 | orchestrator | 19:05:31.683 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-05-13 19:05:31.683598 | orchestrator | 19:05:31.683 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-05-13 19:05:31.683655 | orchestrator | 19:05:31.683 STDOUT terraform:       + all_metadata = (known after apply)
2025-05-13 19:05:31.683747 | orchestrator | 19:05:31.683 STDOUT terraform:       + all_tags = (known after apply)
2025-05-13 19:05:31.683764 | orchestrator | 19:05:31.683 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.683812 | orchestrator | 19:05:31.683 STDOUT terraform:       + config_drive = true
2025-05-13 19:05:31.683868 | orchestrator | 19:05:31.683 STDOUT terraform:       + created = (known after apply)
2025-05-13 19:05:31.683925 | orchestrator | 19:05:31.683 STDOUT terraform:       + flavor_id = (known after apply)
2025-05-13 19:05:31.683966 | orchestrator | 19:05:31.683 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-05-13 19:05:31.684006 | orchestrator | 19:05:31.683 STDOUT terraform:       + force_delete = false
2025-05-13 19:05:31.684055 | orchestrator | 19:05:31.683 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.684104 | orchestrator | 19:05:31.684 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.684145 | orchestrator | 19:05:31.684 STDOUT terraform:       + image_name = (known after apply)
2025-05-13 19:05:31.684195 | orchestrator | 19:05:31.684 STDOUT terraform:       + key_pair = "testbed"
2025-05-13 19:05:31.684211 | orchestrator | 19:05:31.684 STDOUT terraform:       + name = "testbed-node-0"
2025-05-13 19:05:31.684249 | orchestrator | 19:05:31.684 STDOUT terraform:       + power_state = "active"
2025-05-13 19:05:31.684300 | orchestrator | 19:05:31.684 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.684351 | orchestrator | 19:05:31.684 STDOUT terraform:       + security_groups = (known after apply)
2025-05-13 19:05:31.684368 | orchestrator | 19:05:31.684 STDOUT terraform:       + stop_before_destroy = false
2025-05-13 19:05:31.684428 | orchestrator | 19:05:31.684 STDOUT terraform:       + updated = (known after apply)
2025-05-13 19:05:31.684500 | orchestrator | 19:05:31.684 STDOUT terraform:       + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-13 19:05:31.684516 | orchestrator | 19:05:31.684 STDOUT terraform:       + block_device {
2025-05-13 19:05:31.684532 | orchestrator | 19:05:31.684 STDOUT terraform:           + boot_index = 0
2025-05-13 19:05:31.684588 | orchestrator | 19:05:31.684 STDOUT terraform:           + delete_on_termination = false
2025-05-13 19:05:31.684640 | orchestrator | 19:05:31.684 STDOUT terraform:           + destination_type = "volume"
2025-05-13 19:05:31.684656 | orchestrator | 19:05:31.684 STDOUT terraform:           + multiattach = false
2025-05-13 19:05:31.684721 | orchestrator | 19:05:31.684 STDOUT terraform:           + source_type = "volume"
2025-05-13 19:05:31.684763 | orchestrator | 19:05:31.684 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.684780 | orchestrator | 19:05:31.684 STDOUT terraform:         }
2025-05-13 19:05:31.684791 | orchestrator | 19:05:31.684 STDOUT terraform:       + network {
2025-05-13 19:05:31.684806 | orchestrator | 19:05:31.684 STDOUT terraform:           + access_network = false
2025-05-13 19:05:31.684860 | orchestrator | 19:05:31.684 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-05-13 19:05:31.684901 | orchestrator | 19:05:31.684 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-05-13 19:05:31.684941 | orchestrator | 19:05:31.684 STDOUT terraform:           + mac = (known after apply)
2025-05-13 19:05:31.684986 | orchestrator | 19:05:31.684 STDOUT terraform:           + name = (known after apply)
2025-05-13 19:05:31.685038 | orchestrator | 19:05:31.684 STDOUT terraform:           + port = (known after apply)
2025-05-13 19:05:31.685055 | orchestrator | 19:05:31.685 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.685069 | orchestrator | 19:05:31.685 STDOUT terraform:         }
2025-05-13 19:05:31.685084 | orchestrator | 19:05:31.685 STDOUT terraform:     }
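Manager and node instances share one shape: boot index 0 on a pre-built volume, a single network block, config drive enabled; they differ in flavor (OSISM-4V-16 vs OSISM-8V-32), name and user_data. A sketch of the node variant; the volume, user_data and port wiring are assumptions:

```hcl
# Hedged sketch: flavor_name, key_pair, power_state and the block_device flags
# match the plan; user_data and the network reference are assumptions.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes # hypothetical variable, 6 in this run
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/user_data.yml") # shown only as a hash in the plan

  block_device {
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port[count.index].id # hypothetical port resource
  }
}
```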
2025-05-13 19:05:31.685153 | orchestrator | 19:05:31.685 STDOUT terraform:   # openstack_compute_instance_v2.node_server[1] will be created
2025-05-13 19:05:31.685212 | orchestrator | 19:05:31.685 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-05-13 19:05:31.685261 | orchestrator | 19:05:31.685 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-05-13 19:05:31.685312 | orchestrator | 19:05:31.685 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-05-13 19:05:31.685352 | orchestrator | 19:05:31.685 STDOUT terraform:       + all_metadata = (known after apply)
2025-05-13 19:05:31.685407 | orchestrator | 19:05:31.685 STDOUT terraform:       + all_tags = (known after apply)
2025-05-13 19:05:31.685423 | orchestrator | 19:05:31.685 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.685445 | orchestrator | 19:05:31.685 STDOUT terraform:       + config_drive = true
2025-05-13 19:05:31.685508 | orchestrator | 19:05:31.685 STDOUT terraform:       + created = (known after apply)
2025-05-13 19:05:31.685560 | orchestrator | 19:05:31.685 STDOUT terraform:       + flavor_id = (known after apply)
2025-05-13 19:05:31.685612 | orchestrator | 19:05:31.685 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-05-13 19:05:31.685627 | orchestrator | 19:05:31.685 STDOUT terraform:       + force_delete = false
2025-05-13 19:05:31.685678 | orchestrator | 19:05:31.685 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.685850 | orchestrator | 19:05:31.685 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.685887 | orchestrator | 19:05:31.685 STDOUT terraform:       + image_name = (known after apply)
2025-05-13 19:05:31.685900 | orchestrator | 19:05:31.685 STDOUT terraform:       + key_pair = "testbed"
2025-05-13 19:05:31.685906 | orchestrator | 19:05:31.685 STDOUT terraform:       + name = "testbed-node-1"
2025-05-13 19:05:31.685912 | orchestrator | 19:05:31.685 STDOUT terraform:       + power_state = "active"
2025-05-13 19:05:31.685946 | orchestrator | 19:05:31.685 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.685982 | orchestrator | 19:05:31.685 STDOUT terraform:       + security_groups = (known after apply)
2025-05-13 19:05:31.686032 | orchestrator | 19:05:31.685 STDOUT terraform:       + stop_before_destroy = false
2025-05-13 19:05:31.686071 | orchestrator | 19:05:31.686 STDOUT terraform:       + updated = (known after apply)
2025-05-13 19:05:31.686136 | orchestrator | 19:05:31.686 STDOUT terraform:       + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-13 19:05:31.686147 | orchestrator | 19:05:31.686 STDOUT terraform:       + block_device {
2025-05-13 19:05:31.686182 | orchestrator | 19:05:31.686 STDOUT terraform:           + boot_index = 0
2025-05-13 19:05:31.686218 | orchestrator | 19:05:31.686 STDOUT terraform:           + delete_on_termination = false
2025-05-13 19:05:31.686255 | orchestrator | 19:05:31.686 STDOUT terraform:           + destination_type = "volume"
2025-05-13 19:05:31.686292 | orchestrator | 19:05:31.686 STDOUT terraform:           + multiattach = false
2025-05-13 19:05:31.686330 | orchestrator | 19:05:31.686 STDOUT terraform:           + source_type = "volume"
2025-05-13 19:05:31.686380 | orchestrator | 19:05:31.686 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.686390 | orchestrator | 19:05:31.686 STDOUT terraform:         }
2025-05-13 19:05:31.686405 | orchestrator | 19:05:31.686 STDOUT terraform:       + network {
2025-05-13 19:05:31.686433 | orchestrator | 19:05:31.686 STDOUT terraform:           + access_network = false
2025-05-13 19:05:31.686473 | orchestrator | 19:05:31.686 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-05-13 19:05:31.686512 | orchestrator | 19:05:31.686 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-05-13 19:05:31.686554 | orchestrator | 19:05:31.686 STDOUT terraform:           + mac = (known after apply)
2025-05-13 19:05:31.686595 | orchestrator | 19:05:31.686 STDOUT terraform:           + name = (known after apply)
2025-05-13 19:05:31.686674 | orchestrator | 19:05:31.686 STDOUT terraform:           + port = (known after apply)
2025-05-13 19:05:31.686760 | orchestrator | 19:05:31.686 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.686768 | orchestrator | 19:05:31.686 STDOUT terraform:         }
2025-05-13 19:05:31.686774 | orchestrator | 19:05:31.686 STDOUT terraform:     }
2025-05-13 19:05:31.686782 | orchestrator | 19:05:31.686 STDOUT terraform:   # openstack_compute_instance_v2.node_server[2] will be created
2025-05-13 19:05:31.686842 | orchestrator | 19:05:31.686 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-05-13 19:05:31.686887 | orchestrator | 19:05:31.686 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-05-13 19:05:31.686931 | orchestrator | 19:05:31.686 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-05-13 19:05:31.686975 | orchestrator | 19:05:31.686 STDOUT terraform:       + all_metadata = (known after apply)
2025-05-13 19:05:31.687021 | orchestrator | 19:05:31.686 STDOUT terraform:       + all_tags = (known after apply)
2025-05-13 19:05:31.687051 | orchestrator | 19:05:31.687 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.687061 | orchestrator | 19:05:31.687 STDOUT terraform:       + config_drive = true
2025-05-13 19:05:31.687115 | orchestrator | 19:05:31.687 STDOUT terraform:       + created = (known after apply)
2025-05-13 19:05:31.687160 | orchestrator | 19:05:31.687 STDOUT terraform:       + flavor_id = (known after apply)
2025-05-13 19:05:31.687200 | orchestrator | 19:05:31.687 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-05-13 19:05:31.687226 | orchestrator | 19:05:31.687 STDOUT terraform:       + force_delete = false
2025-05-13 19:05:31.687270 | orchestrator | 19:05:31.687 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.687314 | orchestrator | 19:05:31.687 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.687359 | orchestrator | 19:05:31.687 STDOUT terraform:       + image_name = (known after apply)
2025-05-13 19:05:31.687391 | orchestrator | 19:05:31.687 STDOUT terraform:       + key_pair = "testbed"
2025-05-13 19:05:31.687432 | orchestrator | 19:05:31.687 STDOUT terraform:       + name = "testbed-node-2"
2025-05-13 19:05:31.687462 | orchestrator | 19:05:31.687 STDOUT terraform:       + power_state = "active"
2025-05-13 19:05:31.687507 | orchestrator | 19:05:31.687 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.687551 | orchestrator | 19:05:31.687 STDOUT terraform:       + security_groups = (known after apply)
2025-05-13 19:05:31.687575 | orchestrator | 19:05:31.687 STDOUT terraform:       + stop_before_destroy = false
2025-05-13 19:05:31.687624 | orchestrator | 19:05:31.687 STDOUT terraform:       + updated = (known after apply)
2025-05-13 19:05:31.687687 | orchestrator | 19:05:31.687 STDOUT terraform:       + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-13 19:05:31.687734 | orchestrator | 19:05:31.687 STDOUT terraform:       + block_device {
2025-05-13 19:05:31.687766 | orchestrator | 19:05:31.687 STDOUT terraform:           + boot_index = 0
2025-05-13 19:05:31.687802 | orchestrator | 19:05:31.687 STDOUT terraform:           + delete_on_termination = false
2025-05-13 19:05:31.687848 | orchestrator | 19:05:31.687 STDOUT terraform:           + destination_type = "volume"
2025-05-13 19:05:31.687900 | orchestrator | 19:05:31.687 STDOUT terraform:           + multiattach = false
2025-05-13 19:05:31.687955 | orchestrator | 19:05:31.687 STDOUT terraform:           + source_type = "volume"
2025-05-13 19:05:31.688034 | orchestrator | 19:05:31.687 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.688058 | orchestrator | 19:05:31.688 STDOUT terraform:         }
2025-05-13 19:05:31.688081 | orchestrator | 19:05:31.688 STDOUT terraform:       + network {
2025-05-13 19:05:31.688118 | orchestrator | 19:05:31.688 STDOUT terraform:           + access_network = false
2025-05-13 19:05:31.688157 | orchestrator | 19:05:31.688 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-05-13 19:05:31.688196 | orchestrator | 19:05:31.688 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-05-13 19:05:31.688235 | orchestrator | 19:05:31.688 STDOUT terraform:           + mac = (known after apply)
2025-05-13 19:05:31.688273 | orchestrator | 19:05:31.688 STDOUT terraform:           + name = (known after apply)
2025-05-13 19:05:31.688311 | orchestrator | 19:05:31.688 STDOUT terraform:           + port = (known after apply)
2025-05-13 19:05:31.688349 | orchestrator | 19:05:31.688 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.688358 | orchestrator | 19:05:31.688 STDOUT terraform:         }
2025-05-13 19:05:31.688366 | orchestrator | 19:05:31.688 STDOUT terraform:     }
2025-05-13 19:05:31.688427 | orchestrator | 19:05:31.688 STDOUT terraform:   # openstack_compute_instance_v2.node_server[3] will be created
2025-05-13 19:05:31.688478 | orchestrator | 19:05:31.688 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-05-13 19:05:31.688519 | orchestrator | 19:05:31.688 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-05-13 19:05:31.688560 | orchestrator | 19:05:31.688 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-05-13 19:05:31.688605 | orchestrator | 19:05:31.688 STDOUT terraform:       + all_metadata = (known after apply)
2025-05-13 19:05:31.688648 | orchestrator | 19:05:31.688 STDOUT terraform:       + all_tags = (known after apply)
2025-05-13 19:05:31.688678 | orchestrator | 19:05:31.688 STDOUT terraform:       + availability_zone = "nova"
2025-05-13 19:05:31.688721 | orchestrator | 19:05:31.688 STDOUT terraform:       + config_drive = true
2025-05-13 19:05:31.688863 | orchestrator | 19:05:31.688 STDOUT terraform:       + created = (known after apply)
2025-05-13 19:05:31.688916 | orchestrator | 19:05:31.688 STDOUT terraform:       + flavor_id = (known after apply)
2025-05-13 19:05:31.688929 | orchestrator | 19:05:31.688 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-05-13 19:05:31.688946 | orchestrator | 19:05:31.688 STDOUT terraform:       + force_delete = false
2025-05-13 19:05:31.688957 | orchestrator | 19:05:31.688 STDOUT terraform:       + id = (known after apply)
2025-05-13 19:05:31.688967 | orchestrator | 19:05:31.688 STDOUT terraform:       + image_id = (known after apply)
2025-05-13 19:05:31.688980 | orchestrator | 19:05:31.688 STDOUT terraform:       + image_name = (known after apply)
2025-05-13 19:05:31.689002 | orchestrator | 19:05:31.688 STDOUT terraform:       + key_pair = "testbed"
2025-05-13 19:05:31.689015 | orchestrator | 19:05:31.688 STDOUT terraform:       + name = "testbed-node-3"
2025-05-13 19:05:31.689055 | orchestrator | 19:05:31.689 STDOUT terraform:       + power_state = "active"
2025-05-13 19:05:31.689091 | orchestrator | 19:05:31.689 STDOUT terraform:       + region = (known after apply)
2025-05-13 19:05:31.689127 | orchestrator | 19:05:31.689 STDOUT terraform:       + security_groups = (known after apply)
2025-05-13 19:05:31.689141 | orchestrator | 19:05:31.689 STDOUT terraform:       + stop_before_destroy = false
2025-05-13 19:05:31.689198 | orchestrator | 19:05:31.689 STDOUT terraform:       + updated = (known after apply)
2025-05-13 19:05:31.689265 | orchestrator | 19:05:31.689 STDOUT terraform:       + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-13 19:05:31.689288 | orchestrator | 19:05:31.689 STDOUT terraform:       + block_device {
2025-05-13 19:05:31.689303 | orchestrator | 19:05:31.689 STDOUT terraform:           + boot_index = 0
2025-05-13 19:05:31.689318 | orchestrator | 19:05:31.689 STDOUT terraform:           + delete_on_termination = false
2025-05-13 19:05:31.689357 | orchestrator | 19:05:31.689 STDOUT terraform:           + destination_type = "volume"
2025-05-13 19:05:31.689373 | orchestrator | 19:05:31.689 STDOUT terraform:           + multiattach = false
2025-05-13 19:05:31.689426 | orchestrator | 19:05:31.689 STDOUT terraform:           + source_type = "volume"
2025-05-13 19:05:31.689466 | orchestrator | 19:05:31.689 STDOUT terraform:           + uuid = (known after apply)
2025-05-13 19:05:31.689478 | orchestrator | 19:05:31.689 STDOUT terraform:         }
2025-05-13 19:05:31.689493 | orchestrator | 19:05:31.689 STDOUT terraform:       + network {
2025-05-13 19:05:31.689507 | orchestrator | 19:05:31.689 STDOUT terraform:           + access_network = false
2025-05-13 19:05:31.689545 | orchestrator | 19:05:31.689 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-05-13 19:05:31.689561 | orchestrator | 19:05:31.689 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-05-13 19:05:31.689615 | orchestrator | 19:05:31.689 STDOUT terraform:           + mac = (known after apply)
2025-05-13 19:05:31.689665 | orchestrator | 19:05:31.689 STDOUT terraform:           + name = (known after apply)
2025-05-13 19:05:31.689681 | orchestrator | 19:05:31.689 STDOUT terraform:           + port = (known after apply)
2025-05-13 19:05:31.689718 | orchestrator | 19:05:31.689 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 19:05:31.689734 | orchestrator | 19:05:31.689 STDOUT terraform:  } 2025-05-13 19:05:31.689749 | orchestrator | 19:05:31.689 STDOUT terraform:  } 2025-05-13 19:05:31.689807 | orchestrator | 19:05:31.689 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-13 19:05:31.689858 | orchestrator | 19:05:31.689 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-13 19:05:31.689897 | orchestrator | 19:05:31.689 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 19:05:31.689936 | orchestrator | 19:05:31.689 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 19:05:31.689972 | orchestrator | 19:05:31.689 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 19:05:31.690041 | orchestrator | 19:05:31.689 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 19:05:31.690061 | orchestrator | 19:05:31.689 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 19:05:31.690073 | orchestrator | 19:05:31.690 STDOUT terraform:  + config_drive = true 2025-05-13 19:05:31.690114 | orchestrator | 19:05:31.690 STDOUT terraform:  + created = (known after apply) 2025-05-13 19:05:31.690156 | orchestrator | 19:05:31.690 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 19:05:31.690194 | orchestrator | 19:05:31.690 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-13 19:05:31.690210 | orchestrator | 19:05:31.690 STDOUT terraform:  + force_delete = false 2025-05-13 19:05:31.690264 | orchestrator | 19:05:31.690 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.690306 | orchestrator | 19:05:31.690 STDOUT terraform:  + image_id = (known after apply) 2025-05-13 19:05:31.690350 | orchestrator | 19:05:31.690 STDOUT terraform:  + image_name = (known after apply) 2025-05-13 19:05:31.690366 | orchestrator | 19:05:31.690 STDOUT terraform:  + key_pair = "testbed" 2025-05-13 19:05:31.690414 | orchestrator | 19:05:31.690 STDOUT terraform:  + name = "testbed-node-4" 2025-05-13 19:05:31.690430 | orchestrator | 19:05:31.690 STDOUT terraform:  + power_state = "active" 2025-05-13 19:05:31.690481 | orchestrator | 19:05:31.690 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.690524 | orchestrator | 19:05:31.690 STDOUT terraform:  + security_groups = (known after apply) 2025-05-13 19:05:31.690540 | orchestrator | 19:05:31.690 STDOUT terraform:  + stop_before_destroy = false 2025-05-13 19:05:31.690588 | orchestrator | 19:05:31.690 STDOUT terraform:  + updated = (known after apply) 2025-05-13 19:05:31.690648 | orchestrator | 19:05:31.690 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-13 19:05:31.690664 | orchestrator | 19:05:31.690 STDOUT terraform:  + block_device { 2025-05-13 19:05:31.690678 | orchestrator | 19:05:31.690 STDOUT terraform:  + boot_index = 0 2025-05-13 19:05:31.690738 | orchestrator | 19:05:31.690 STDOUT terraform:  + delete_on_termination = false 2025-05-13 19:05:31.690777 | orchestrator | 19:05:31.690 STDOUT terraform:  + destination_type = "volume" 2025-05-13 19:05:31.690793 | orchestrator | 19:05:31.690 STDOUT terraform:  + multiattach = false 2025-05-13 19:05:31.690839 | orchestrator | 19:05:31.690 STDOUT terraform:  + source_type = "volume" 2025-05-13 19:05:31.690886 | orchestrator | 19:05:31.690 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 19:05:31.690902 | orchestrator | 
19:05:31.690 STDOUT terraform:  } 2025-05-13 19:05:31.690977 | orchestrator | 19:05:31.690 STDOUT terraform:  + network { 2025-05-13 19:05:31.690995 | orchestrator | 19:05:31.690 STDOUT terraform:  + access_network = false 2025-05-13 19:05:31.691037 | orchestrator | 19:05:31.690 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-13 19:05:31.691064 | orchestrator | 19:05:31.691 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-13 19:05:31.691108 | orchestrator | 19:05:31.691 STDOUT terraform:  + mac = (known after apply) 2025-05-13 19:05:31.691147 | orchestrator | 19:05:31.691 STDOUT terraform:  + name = (known after apply) 2025-05-13 19:05:31.691234 | orchestrator | 19:05:31.691 STDOUT terraform:  + port = (known after apply) 2025-05-13 19:05:31.691247 | orchestrator | 19:05:31.691 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 19:05:31.691258 | orchestrator | 19:05:31.691 STDOUT terraform:  } 2025-05-13 19:05:31.691270 | orchestrator | 19:05:31.691 STDOUT terraform:  } 2025-05-13 19:05:31.691284 | orchestrator | 19:05:31.691 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-13 19:05:31.691341 | orchestrator | 19:05:31.691 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-13 19:05:31.691382 | orchestrator | 19:05:31.691 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-13 19:05:31.691399 | orchestrator | 19:05:31.691 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-13 19:05:31.691457 | orchestrator | 19:05:31.691 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-13 19:05:31.691498 | orchestrator | 19:05:31.691 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 19:05:31.691514 | orchestrator | 19:05:31.691 STDOUT terraform:  + availability_zone = "nova" 2025-05-13 19:05:31.691536 | orchestrator | 19:05:31.691 STDOUT terraform:  + config_drive = true 2025-05-13 19:05:31.691587 | orchestrator | 19:05:31.691 STDOUT terraform:  + created = (known after apply) 2025-05-13 19:05:31.691638 | orchestrator | 19:05:31.691 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-13 19:05:31.691655 | orchestrator | 19:05:31.691 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-13 19:05:31.691670 | orchestrator | 19:05:31.691 STDOUT terraform:  + force_delete = false 2025-05-13 19:05:31.691845 | orchestrator | 19:05:31.691 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.691891 | orchestrator | 19:05:31.691 STDOUT terraform:  + image_id = (known after apply) 2025-05-13 19:05:31.691906 | orchestrator | 19:05:31.691 STDOUT terraform:  + image_name = (known after apply) 2025-05-13 19:05:31.691913 | orchestrator | 19:05:31.691 STDOUT terraform:  + key_pair = "testbed" 2025-05-13 19:05:31.691918 | orchestrator | 19:05:31.691 STDOUT terraform:  + name = "testbed-node-5" 2025-05-13 19:05:31.691924 | orchestrator | 19:05:31.691 STDOUT terraform:  + power_state = "active" 2025-05-13 19:05:31.691953 | orchestrator | 19:05:31.691 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.691992 | orchestrator | 19:05:31.691 STDOUT terraform:  + security_groups = (known after apply) 2025-05-13 19:05:31.692015 | orchestrator | 19:05:31.691 STDOUT terraform:  + stop_before_destroy = false 2025-05-13 19:05:31.692059 | orchestrator | 19:05:31.692 STDOUT terraform:  + updated = (known after apply) 2025-05-13 19:05:31.692119 | orchestrator | 19:05:31.692 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-13 19:05:31.692141 | orchestrator | 19:05:31.692 STDOUT terraform:  + block_device { 2025-05-13 19:05:31.692165 | orchestrator | 19:05:31.692 STDOUT terraform:  + boot_index = 0 2025-05-13 19:05:31.692189 | orchestrator | 19:05:31.692 STDOUT terraform:  + delete_on_termination = false 2025-05-13 19:05:31.692222 | orchestrator | 19:05:31.692 STDOUT terraform:  + destination_type = "volume" 2025-05-13 19:05:31.692254 | orchestrator | 19:05:31.692 STDOUT terraform:  + multiattach = false 2025-05-13 19:05:31.692287 | orchestrator | 19:05:31.692 STDOUT terraform:  + source_type = "volume" 2025-05-13 19:05:31.692328 | orchestrator | 19:05:31.692 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 19:05:31.692337 | orchestrator | 19:05:31.692 STDOUT terraform:  } 2025-05-13 19:05:31.692345 | orchestrator | 19:05:31.692 STDOUT terraform:  + network { 2025-05-13 19:05:31.692374 | orchestrator | 19:05:31.692 STDOUT terraform:  + access_network = false 2025-05-13 19:05:31.692407 | orchestrator | 19:05:31.692 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-13 19:05:31.692441 | orchestrator | 19:05:31.692 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-13 19:05:31.692477 | orchestrator | 19:05:31.692 STDOUT terraform:  + mac = (known after apply) 2025-05-13 19:05:31.692512 | orchestrator | 19:05:31.692 STDOUT terraform:  + name = (known after apply) 2025-05-13 19:05:31.692546 | orchestrator | 19:05:31.692 STDOUT terraform:  + port = (known after apply) 2025-05-13 19:05:31.692582 | orchestrator | 19:05:31.692 STDOUT terraform:  + uuid = (known after apply) 2025-05-13 19:05:31.692591 | orchestrator | 19:05:31.692 STDOUT terraform:  } 2025-05-13 19:05:31.692605 | orchestrator | 19:05:31.692 STDOUT terraform:  } 2025-05-13 19:05:31.692642 | orchestrator | 19:05:31.692 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-13 19:05:31.692683 | orchestrator | 19:05:31.692 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-13 19:05:31.692717 | orchestrator | 19:05:31.692 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-13 19:05:31.692758 | orchestrator | 19:05:31.692 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.692772 | orchestrator | 19:05:31.692 STDOUT terraform:  + name = "testbed" 2025-05-13 19:05:31.692806 | orchestrator | 19:05:31.692 STDOUT terraform:  + private_key = (sensitive value) 2025-05-13 19:05:31.692836 | orchestrator | 19:05:31.692 STDOUT terraform:  + public_key = (known after apply) 2025-05-13 19:05:31.692859 | orchestrator | 19:05:31.692 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.692892 | orchestrator | 19:05:31.692 STDOUT terraform:  + user_id = (known after apply) 2025-05-13 19:05:31.692900 | orchestrator | 19:05:31.692 STDOUT terraform:  } 2025-05-13 19:05:31.692956 | orchestrator | 19:05:31.692 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-13 19:05:31.693010 | orchestrator | 19:05:31.692 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-13 19:05:31.693047 | orchestrator | 19:05:31.692 STDOUT terraform:  + device = (known after apply) 2025-05-13 19:05:31.693070 | orchestrator | 19:05:31.693 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.693101 | orchestrator | 19:05:31.693 STDOUT terraform:  + instance_id = (known after apply) 2025-05-13 19:05:31.693132 | 
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
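The nine attachment entries are all (known after apply), so the log alone does not show which volume lands on which node. A minimal counted-attachment sketch, where the index arithmetic and the extra-volume resource name are pure assumptions about how 9 attachments map onto 6 instances:

  # Hypothetical sketch; the real testbed mapping may differ.
  resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
    count       = 9
    instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
    volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id  # assumed resource name
  }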
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
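Only the manager gets external reachability: a floating IP from the "public" pool, bound to its management port. A sketch using the resource names from the plan (the port reference matches the manager_port_management resource planned below):

  resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
    pool = "public"
  }

  resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
    floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
    port_id     = openstack_networking_port_v2.manager_port_management.id
  }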
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
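The manager's port pins a fixed address (192.168.16.5) and whitelists extra source addresses via allowed_address_pairs, which is what lets VIPs and overlay traffic pass Neutron's anti-spoofing. A sketch matching the planned values, with the subnet resource name assumed:

  resource "openstack_networking_port_v2" "manager_port_management" {
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      ip_address = "192.168.16.5"
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed resource name
    }

    allowed_address_pairs {
      ip_address = "192.168.112.0/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/20"
    }
  }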
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }
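The six node ports differ only in their fixed IP (192.168.16.10 through .15), which suggests an offset-by-index pattern. A sketch of that pattern, with the subnet resource name assumed:

  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      # cidrhost("192.168.16.0/20", 10) == 192.168.16.10, so node 0 gets .10, node 5 gets .15
      ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed resource name
    }

    # allowed_address_pairs blocks as in the plan above
  }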
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
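The router uplinks the management network to the external network whose ID appears verbatim in the plan. A sketch of the pair, again assuming the subnet resource name:

  resource "openstack_networking_router_v2" "router" {
    name                    = "testbed"
    external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
    availability_zone_hints = ["nova"]
  }

  resource "openstack_networking_router_interface_v2" "router_interface" {
    router_id = openstack_networking_router_v2.router.id
    subnet_id = openstack_networking_subnet_v2.subnet_management.id  # assumed resource name
  }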
orchestrator | 19:05:31.705 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.706468 | orchestrator | 19:05:31.705 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.706474 | orchestrator | 19:05:31.705 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.706478 | orchestrator | 19:05:31.705 STDOUT terraform:  } 2025-05-13 19:05:31.706482 | orchestrator | 19:05:31.705 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-13 19:05:31.706485 | orchestrator | 19:05:31.705 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-13 19:05:31.706489 | orchestrator | 19:05:31.705 STDOUT terraform:  + description = "wireguard" 2025-05-13 19:05:31.706493 | orchestrator | 19:05:31.705 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.706497 | orchestrator | 19:05:31.705 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.706500 | orchestrator | 19:05:31.705 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.706504 | orchestrator | 19:05:31.705 STDOUT terraform:  + port_range_max = 51820 2025-05-13 19:05:31.706508 | orchestrator | 19:05:31.705 STDOUT terraform:  + port_range_min = 51820 2025-05-13 19:05:31.706512 | orchestrator | 19:05:31.705 STDOUT terraform:  + protocol = "udp" 2025-05-13 19:05:31.706516 | orchestrator | 19:05:31.705 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.706519 | orchestrator | 19:05:31.705 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.706523 | orchestrator | 19:05:31.705 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.706527 | orchestrator | 19:05:31.705 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.706531 | orchestrator | 19:05:31.705 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.706535 | orchestrator | 19:05:31.705 STDOUT terraform:  } 2025-05-13 19:05:31.706538 | orchestrator | 19:05:31.705 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-13 19:05:31.706542 | orchestrator | 19:05:31.705 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-13 19:05:31.706546 | orchestrator | 19:05:31.706 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.706549 | orchestrator | 19:05:31.706 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.706553 | orchestrator | 19:05:31.706 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.706557 | orchestrator | 19:05:31.706 STDOUT terraform:  + protocol = "tcp" 2025-05-13 19:05:31.706561 | orchestrator | 19:05:31.706 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.706568 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.706572 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-13 19:05:31.706576 | orchestrator | 19:05:31.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.706580 | orchestrator | 19:05:31.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.706583 | orchestrator | 19:05:31.706 STDOUT terraform:  } 2025-05-13 19:05:31.706587 | orchestrator | 19:05:31.706 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-13 19:05:31.706591 | orchestrator | 19:05:31.706 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-13 19:05:31.706595 | orchestrator | 19:05:31.706 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.706598 | orchestrator | 19:05:31.706 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.706609 | orchestrator | 19:05:31.706 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.706613 | orchestrator | 19:05:31.706 STDOUT terraform:  + protocol = "udp" 2025-05-13 19:05:31.706617 | orchestrator | 19:05:31.706 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.706620 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.706624 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-13 19:05:31.706628 | orchestrator | 19:05:31.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.706632 | orchestrator | 19:05:31.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.706637 | orchestrator | 19:05:31.706 STDOUT terraform:  } 2025-05-13 19:05:31.707362 | orchestrator | 19:05:31.706 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-13 19:05:31.707374 | orchestrator | 19:05:31.706 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-13 19:05:31.707379 | orchestrator | 19:05:31.706 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.707383 | orchestrator | 19:05:31.706 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.707387 | orchestrator | 19:05:31.706 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.707395 | orchestrator | 19:05:31.706 STDOUT terraform:  + protocol = "icmp" 2025-05-13 19:05:31.707399 | orchestrator | 19:05:31.706 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.707403 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.707407 | orchestrator | 19:05:31.706 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.707411 | orchestrator | 19:05:31.706 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.707415 | orchestrator | 19:05:31.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.707418 | orchestrator | 19:05:31.706 STDOUT terraform:  } 2025-05-13 19:05:31.707431 | orchestrator | 19:05:31.706 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-13 19:05:31.707435 | orchestrator | 19:05:31.707 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-13 19:05:31.707439 | orchestrator | 19:05:31.707 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.707443 | orchestrator | 19:05:31.707 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.707446 | orchestrator | 19:05:31.707 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.707450 | orchestrator | 19:05:31.707 STDOUT terraform:  + protocol = "tcp" 2025-05-13 19:05:31.707455 | orchestrator | 19:05:31.707 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.707461 | orchestrator | 19:05:31.707 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-05-13 19:05:31.707466 | orchestrator | 19:05:31.707 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.707472 | orchestrator | 19:05:31.707 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.707478 | orchestrator | 19:05:31.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.707484 | orchestrator | 19:05:31.707 STDOUT terraform:  } 2025-05-13 19:05:31.707490 | orchestrator | 19:05:31.707 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-13 19:05:31.707499 | orchestrator | 19:05:31.707 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-13 19:05:31.707505 | orchestrator | 19:05:31.707 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.707511 | orchestrator | 19:05:31.707 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.707516 | orchestrator | 19:05:31.707 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.707522 | orchestrator | 19:05:31.707 STDOUT terraform:  + protocol = "udp" 2025-05-13 19:05:31.707528 | orchestrator | 19:05:31.707 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.707537 | orchestrator | 19:05:31.707 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.707543 | orchestrator | 19:05:31.707 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.707579 | orchestrator | 19:05:31.707 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.707586 | orchestrator | 19:05:31.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.707606 | orchestrator | 19:05:31.707 STDOUT terraform:  } 2025-05-13 19:05:31.707674 | orchestrator | 19:05:31.707 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-13 19:05:31.707746 | orchestrator | 19:05:31.707 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-13 19:05:31.707753 | orchestrator | 19:05:31.707 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.707758 | orchestrator | 19:05:31.707 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.707801 | orchestrator | 19:05:31.707 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.707820 | orchestrator | 19:05:31.707 STDOUT terraform:  + protocol = "icmp" 2025-05-13 19:05:31.707835 | orchestrator | 19:05:31.707 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.707873 | orchestrator | 19:05:31.707 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.707880 | orchestrator | 19:05:31.707 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.707920 | orchestrator | 19:05:31.707 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.707950 | orchestrator | 19:05:31.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.707956 | orchestrator | 19:05:31.707 STDOUT terraform:  } 2025-05-13 19:05:31.708011 | orchestrator | 19:05:31.707 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-13 19:05:31.708065 | orchestrator | 19:05:31.708 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-13 19:05:31.708072 | orchestrator | 19:05:31.708 STDOUT terraform:  + description = 
"vrrp" 2025-05-13 19:05:31.708117 | orchestrator | 19:05:31.708 STDOUT terraform:  + direction = "ingress" 2025-05-13 19:05:31.708153 | orchestrator | 19:05:31.708 STDOUT terraform:  + ethertype = "IPv4" 2025-05-13 19:05:31.708202 | orchestrator | 19:05:31.708 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.708243 | orchestrator | 19:05:31.708 STDOUT terraform:  + protocol = "112" 2025-05-13 19:05:31.708275 | orchestrator | 19:05:31.708 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.708305 | orchestrator | 19:05:31.708 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-13 19:05:31.708330 | orchestrator | 19:05:31.708 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-13 19:05:31.708362 | orchestrator | 19:05:31.708 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-13 19:05:31.708394 | orchestrator | 19:05:31.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.708400 | orchestrator | 19:05:31.708 STDOUT terraform:  } 2025-05-13 19:05:31.708453 | orchestrator | 19:05:31.708 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-13 19:05:31.708501 | orchestrator | 19:05:31.708 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-13 19:05:31.708530 | orchestrator | 19:05:31.708 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 19:05:31.708565 | orchestrator | 19:05:31.708 STDOUT terraform:  + description = "management security group" 2025-05-13 19:05:31.708600 | orchestrator | 19:05:31.708 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.708612 | orchestrator | 19:05:31.708 STDOUT terraform:  + name = "testbed-management" 2025-05-13 19:05:31.708651 | orchestrator | 19:05:31.708 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.708677 | orchestrator | 19:05:31.708 STDOUT terraform:  + stateful = (known after apply) 2025-05-13 19:05:31.708712 | orchestrator | 19:05:31.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.708725 | orchestrator | 19:05:31.708 STDOUT terraform:  } 2025-05-13 19:05:31.708769 | orchestrator | 19:05:31.708 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-13 19:05:31.708815 | orchestrator | 19:05:31.708 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-13 19:05:31.708844 | orchestrator | 19:05:31.708 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 19:05:31.708875 | orchestrator | 19:05:31.708 STDOUT terraform:  + description = "node security group" 2025-05-13 19:05:31.708904 | orchestrator | 19:05:31.708 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.708916 | orchestrator | 19:05:31.708 STDOUT terraform:  + name = "testbed-node" 2025-05-13 19:05:31.708958 | orchestrator | 19:05:31.708 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.709033 | orchestrator | 19:05:31.708 STDOUT terraform:  + stateful = (known after apply) 2025-05-13 19:05:31.709040 | orchestrator | 19:05:31.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.709050 | orchestrator | 19:05:31.709 STDOUT terraform:  } 2025-05-13 19:05:31.709089 | orchestrator | 19:05:31.709 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-13 19:05:31.709136 | orchestrator | 19:05:31.709 STDOUT terraform:  + resource 
"openstack_networking_subnet_v2" "subnet_management" { 2025-05-13 19:05:31.709166 | orchestrator | 19:05:31.709 STDOUT terraform:  + all_tags = (known after apply) 2025-05-13 19:05:31.709196 | orchestrator | 19:05:31.709 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-13 19:05:31.709216 | orchestrator | 19:05:31.709 STDOUT terraform:  + dns_nameservers = [ 2025-05-13 19:05:31.709222 | orchestrator | 19:05:31.709 STDOUT terraform:  + "8.8.8.8", 2025-05-13 19:05:31.709247 | orchestrator | 19:05:31.709 STDOUT terraform:  + "9.9.9.9", 2025-05-13 19:05:31.709253 | orchestrator | 19:05:31.709 STDOUT terraform:  ] 2025-05-13 19:05:31.709282 | orchestrator | 19:05:31.709 STDOUT terraform:  + enable_dhcp = true 2025-05-13 19:05:31.709313 | orchestrator | 19:05:31.709 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-13 19:05:31.709345 | orchestrator | 19:05:31.709 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.709354 | orchestrator | 19:05:31.709 STDOUT terraform:  + ip_version = 4 2025-05-13 19:05:31.709391 | orchestrator | 19:05:31.709 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-13 19:05:31.709425 | orchestrator | 19:05:31.709 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-13 19:05:31.709462 | orchestrator | 19:05:31.709 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-13 19:05:31.709487 | orchestrator | 19:05:31.709 STDOUT terraform:  + network_id = (known after apply) 2025-05-13 19:05:31.709495 | orchestrator | 19:05:31.709 STDOUT terraform:  + no_gateway = false 2025-05-13 19:05:31.709538 | orchestrator | 19:05:31.709 STDOUT terraform:  + region = (known after apply) 2025-05-13 19:05:31.709565 | orchestrator | 19:05:31.709 STDOUT terraform:  + service_types = (known after apply) 2025-05-13 19:05:31.709592 | orchestrator | 19:05:31.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-13 19:05:31.709609 | orchestrator | 19:05:31.709 STDOUT terraform:  + allocation_pool { 2025-05-13 19:05:31.709617 | orchestrator | 19:05:31.709 STDOUT terraform:  + end = "192.168.31.250" 2025-05-13 19:05:31.709647 | orchestrator | 19:05:31.709 STDOUT terraform:  + start = "192.168.31.200" 2025-05-13 19:05:31.709654 | orchestrator | 19:05:31.709 STDOUT terraform:  } 2025-05-13 19:05:31.709659 | orchestrator | 19:05:31.709 STDOUT terraform:  } 2025-05-13 19:05:31.709707 | orchestrator | 19:05:31.709 STDOUT terraform:  # terraform_data.image will be created 2025-05-13 19:05:31.709753 | orchestrator | 19:05:31.709 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-13 19:05:31.709765 | orchestrator | 19:05:31.709 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.709789 | orchestrator | 19:05:31.709 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-13 19:05:31.709797 | orchestrator | 19:05:31.709 STDOUT terraform:  + output = (known after apply) 2025-05-13 19:05:31.709815 | orchestrator | 19:05:31.709 STDOUT terraform:  } 2025-05-13 19:05:31.709881 | orchestrator | 19:05:31.709 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-13 19:05:31.709887 | orchestrator | 19:05:31.709 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-13 19:05:31.709893 | orchestrator | 19:05:31.709 STDOUT terraform:  + id = (known after apply) 2025-05-13 19:05:31.709898 | orchestrator | 19:05:31.709 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-13 19:05:31.709933 | orchestrator | 19:05:31.709 STDOUT terraform:  + output = (known after apply) 2025-05-13 
19:05:31.709939 | orchestrator | 19:05:31.709 STDOUT terraform:  } 2025-05-13 19:05:31.709973 | orchestrator | 19:05:31.709 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-13 19:05:31.709979 | orchestrator | 19:05:31.709 STDOUT terraform: Changes to Outputs: 2025-05-13 19:05:31.710009 | orchestrator | 19:05:31.709 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-13 19:05:31.710041 | orchestrator | 19:05:31.710 STDOUT terraform:  + private_key = (sensitive value) 2025-05-13 19:05:31.893537 | orchestrator | 19:05:31.892 STDOUT terraform: terraform_data.image: Creating... 2025-05-13 19:05:31.893640 | orchestrator | 19:05:31.892 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0fa01b83-c9a8-c463-d9fb-857c36259621] 2025-05-13 19:05:31.893658 | orchestrator | 19:05:31.893 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-13 19:05:31.894532 | orchestrator | 19:05:31.894 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=e6ca59ab-4d8d-77aa-eaf7-a07dab19243f] 2025-05-13 19:05:31.910249 | orchestrator | 19:05:31.910 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-13 19:05:31.910509 | orchestrator | 19:05:31.910 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-13 19:05:31.920987 | orchestrator | 19:05:31.920 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-13 19:05:31.921811 | orchestrator | 19:05:31.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-13 19:05:31.926624 | orchestrator | 19:05:31.926 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-13 19:05:31.927307 | orchestrator | 19:05:31.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-13 19:05:31.927886 | orchestrator | 19:05:31.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-13 19:05:31.929118 | orchestrator | 19:05:31.928 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-13 19:05:31.929552 | orchestrator | 19:05:31.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-13 19:05:31.935866 | orchestrator | 19:05:31.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-13 19:05:32.405608 | orchestrator | 19:05:32.405 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-13 19:05:32.415287 | orchestrator | 19:05:32.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-13 19:05:32.418338 | orchestrator | 19:05:32.418 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-13 19:05:32.426282 | orchestrator | 19:05:32.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-13 19:05:33.650907 | orchestrator | 19:05:33.650 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 2s [id=testbed] 2025-05-13 19:05:33.658795 | orchestrator | 19:05:33.658 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
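The two terraform_data resources are the first to finish because they create nothing in the cloud; they only pin the image name ("Ubuntu 24.04") so that the image data sources read immediately afterwards can depend on it. A minimal sketch of that pattern, where the data-source arguments are assumptions (the log only shows the reads, not the filters):

    resource "terraform_data" "image" {
      # "output" mirrors "input" once applied, giving downstream blocks
      # a stable reference to the image name.
      input = "Ubuntu 24.04"
    }

    data "openstack_images_image_v2" "image" {
      name        = terraform_data.image.output
      most_recent = true   # assumption: the actual filter is not visible in the log
    }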
2025-05-13 19:05:39.585989 | orchestrator | 19:05:39.585 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 8s [id=ba19ae52-6ece-4a35-a41e-29db33a8ea1c] 2025-05-13 19:05:39.598381 | orchestrator | 19:05:39.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-13 19:05:41.922724 | orchestrator | 19:05:41.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-13 19:05:41.929927 | orchestrator | 19:05:41.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-13 19:05:41.930107 | orchestrator | 19:05:41.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-13 19:05:41.930281 | orchestrator | 19:05:41.930 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-13 19:05:41.930425 | orchestrator | 19:05:41.930 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-13 19:05:41.936273 | orchestrator | 19:05:41.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-13 19:05:42.417365 | orchestrator | 19:05:42.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-13 19:05:42.427555 | orchestrator | 19:05:42.427 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-13 19:05:42.491114 | orchestrator | 19:05:42.490 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=0bd34d58-f920-45be-9e9c-4745e29ec711] 2025-05-13 19:05:42.503362 | orchestrator | 19:05:42.502 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=97094a75-4993-40db-897e-adadcd017b36] 2025-05-13 19:05:42.505210 | orchestrator | 19:05:42.505 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-13 19:05:42.511310 | orchestrator | 19:05:42.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-13 19:05:42.517655 | orchestrator | 19:05:42.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=9d4a667e-1daa-4ea2-845b-5122e74908eb] 2025-05-13 19:05:42.527709 | orchestrator | 19:05:42.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-13 19:05:42.541947 | orchestrator | 19:05:42.541 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=e87b71fc-701a-46cb-bbd9-3f15f37c3043] 2025-05-13 19:05:42.547543 | orchestrator | 19:05:42.547 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-13 19:05:42.553007 | orchestrator | 19:05:42.552 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=34a01356-b2ad-4692-b4fa-0e371ae7ecbd] 2025-05-13 19:05:42.557988 | orchestrator | 19:05:42.557 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
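The node_volume[0..8] and node_base_volume[0..5] instances above are counted Cinder volumes; the base volumes carry the boot image for boot-from-volume nodes, and the log later attaches the nine extra volumes three apiece to node_server[3..5]. A sketch under assumed names and sizes (the counts match the log, everything else is illustrative):

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count = 9                                    # indexes [0]..[8] in the log
      name  = "testbed-node-volume-${count.index}"
      size  = 20                                   # assumption: sizes are not logged
    }

    resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      count    = 6
      name     = "testbed-node-base-${count.index}"
      size     = 50                                # assumption
      image_id = data.openstack_images_image_v2.image_node.id   # boot-from-volume base
    }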
2025-05-13 19:05:42.567012 | orchestrator | 19:05:42.566 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=10c33077-7b2d-46df-acf0-04e3d7859f61] 2025-05-13 19:05:42.572395 | orchestrator | 19:05:42.572 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-13 19:05:42.603878 | orchestrator | 19:05:42.603 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=5a89f530-918e-4949-9347-1038fd288b0d] 2025-05-13 19:05:42.617065 | orchestrator | 19:05:42.616 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-13 19:05:42.619942 | orchestrator | 19:05:42.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=04d2f464-e449-42d7-9ceb-0224b6b42ef4] 2025-05-13 19:05:42.622453 | orchestrator | 19:05:42.622 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=425598fa119ed3b02bc2f997183cf9a62096ff80] 2025-05-13 19:05:42.627147 | orchestrator | 19:05:42.627 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-13 19:05:42.630479 | orchestrator | 19:05:42.629 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-13 19:05:42.632719 | orchestrator | 19:05:42.632 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=dfbc51d5cd9f947ec2281fc9895f85181dd961d1] 2025-05-13 19:05:43.660782 | orchestrator | 19:05:43.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-13 19:05:43.834541 | orchestrator | 19:05:43.834 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=ca00bcd5-8e8a-4b90-8497-af6d74b86161] 2025-05-13 19:05:49.480159 | orchestrator | 19:05:49.479 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=c0046335-b447-4510-bf78-29606a4ec7e4] 2025-05-13 19:05:49.489960 | orchestrator | 19:05:49.489 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-13 19:05:49.598985 | orchestrator | 19:05:49.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-13 19:05:49.903653 | orchestrator | 19:05:49.903 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=6d8121aa-6ca9-42c7-878e-7472efa518ca] 2025-05-13 19:05:52.507003 | orchestrator | 19:05:52.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-13 19:05:52.512278 | orchestrator | 19:05:52.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-13 19:05:52.528644 | orchestrator | 19:05:52.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-13 19:05:52.549047 | orchestrator | 19:05:52.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-13 19:05:52.559403 | orchestrator | 19:05:52.559 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-13 19:05:52.573647 | orchestrator | 19:05:52.573 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-13 19:05:52.861144 | orchestrator | 19:05:52.860 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b] 2025-05-13 19:05:52.882482 | orchestrator | 19:05:52.881 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=01f96eca-3323-4c61-8f0f-6c13d6bd13ea] 2025-05-13 19:05:52.904862 | orchestrator | 19:05:52.904 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=41c94169-cd66-4abb-b62b-5ec1ccb982a2] 2025-05-13 19:05:52.925830 | orchestrator | 19:05:52.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=0d5abef6-0ff0-4989-a4ff-307849d725af] 2025-05-13 19:05:52.938548 | orchestrator | 19:05:52.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=8ed24ab4-c68b-4a4d-ac28-b638953962bf] 2025-05-13 19:05:52.970410 | orchestrator | 19:05:52.970 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=c460569b-e51c-4942-937b-b14be4f74e25] 2025-05-13 19:05:56.781864 | orchestrator | 19:05:56.781 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=606f5846-3c3a-4240-a085-1505ac740611] 2025-05-13 19:05:56.787369 | orchestrator | 19:05:56.787 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-13 19:05:56.788391 | orchestrator | 19:05:56.788 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-13 19:05:56.791203 | orchestrator | 19:05:56.790 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-13 19:05:56.911968 | orchestrator | 19:05:56.911 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=07f8395e-30b4-4d30-abef-d8dde3bd94a5] 2025-05-13 19:05:56.920573 | orchestrator | 19:05:56.920 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-13 19:05:56.920622 | orchestrator | 19:05:56.920 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-13 19:05:56.922679 | orchestrator | 19:05:56.922 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-13 19:05:56.926154 | orchestrator | 19:05:56.925 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-13 19:05:56.926189 | orchestrator | 19:05:56.926 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-13 19:05:56.933537 | orchestrator | 19:05:56.933 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-13 19:05:56.947308 | orchestrator | 19:05:56.947 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=0d07c3fa-0ab0-4120-bd51-310cd429876c] 2025-05-13 19:05:56.952074 | orchestrator | 19:05:56.951 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-13 19:05:56.953382 | orchestrator | 19:05:56.953 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
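The security groups and their rules complete almost instantly because they are pure Neutron API objects with no build step. The plan output above lists every argument, so the SSH rule can be reconstructed nearly verbatim; only the cross-resource wiring is inferred:

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }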
2025-05-13 19:05:56.955052 | orchestrator | 19:05:56.954 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-13 19:05:57.027791 | orchestrator | 19:05:57.027 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=452619a4-6204-47cc-8c0d-19d3a2d31826] 2025-05-13 19:05:57.036386 | orchestrator | 19:05:57.036 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-13 19:05:57.054217 | orchestrator | 19:05:57.053 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=355d9bc3-e568-4807-944b-7b25177b2ebd] 2025-05-13 19:05:57.071082 | orchestrator | 19:05:57.070 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-13 19:05:57.155363 | orchestrator | 19:05:57.154 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=7c8ba179-30de-4dba-b11e-24f938c7b20c] 2025-05-13 19:05:57.167493 | orchestrator | 19:05:57.167 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-13 19:05:57.196451 | orchestrator | 19:05:57.196 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d56b0975-acf4-4631-a6c0-e62cfe206077] 2025-05-13 19:05:57.213496 | orchestrator | 19:05:57.213 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-13 19:05:57.272143 | orchestrator | 19:05:57.271 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=8efb6e52-3a92-416e-bd43-8493a06bf7dd] 2025-05-13 19:05:57.287466 | orchestrator | 19:05:57.287 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-13 19:05:57.348560 | orchestrator | 19:05:57.348 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=d0a7b4f2-1681-43d7-916b-73208480d17d] 2025-05-13 19:05:57.357030 | orchestrator | 19:05:57.356 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-13 19:05:57.558099 | orchestrator | 19:05:57.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=2968f973-c17b-48ff-808f-7c3bca15b827] 2025-05-13 19:05:57.575927 | orchestrator | 19:05:57.575 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
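Each management port pairs a fixed IP on the management subnet with allowed_address_pairs entries for the VIP and overlay ranges, as shown in the plan at the top of this excerpt. A sketch combining the subnet (whose CIDR, DNS servers, and allocation pool the plan lists verbatim) with one such port; the resource header for the port with fixed IP 192.168.16.15 is cut off in the log, so treat the port name as illustrative:

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_port_v2" "port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.15"
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"   # VIP/service ranges from the plan
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
    }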
2025-05-13 19:05:57.596104 | orchestrator | 19:05:57.595 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=356f4213-7114-4458-9540-f7e01cc5c5c2] 2025-05-13 19:05:57.714252 | orchestrator | 19:05:57.713 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0d93af16-6174-41e6-b9e3-bf82adefc6fa] 2025-05-13 19:06:02.778745 | orchestrator | 19:06:02.778 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=2533edc6-ae90-48f5-bce3-d234f05de5b8] 2025-05-13 19:06:03.000825 | orchestrator | 19:06:03.000 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=9f6ad65d-bc63-4a19-8402-f2972f2671f4] 2025-05-13 19:06:03.059826 | orchestrator | 19:06:03.059 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=a1e09806-0bbd-4a75-915e-9fc887682f6e] 2025-05-13 19:06:03.253491 | orchestrator | 19:06:03.253 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=841e22fb-5248-4ad5-bc51-74f7a44481ee] 2025-05-13 19:06:03.370657 | orchestrator | 19:06:03.370 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=8c7357bb-6500-4ea2-92c4-d3e21060221f] 2025-05-13 19:06:03.515626 | orchestrator | 19:06:03.515 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=363d5a9d-48f8-4ee9-a4ff-022dd31dba0b] 2025-05-13 19:06:03.598533 | orchestrator | 19:06:03.598 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=5319c854-daf7-41b1-8578-6a9aa5e06bf0] 2025-05-13 19:06:04.191976 | orchestrator | 19:06:04.191 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=2f2789ac-1c16-4ac7-9f34-697b385d08bd] 2025-05-13 19:06:04.211893 | orchestrator | 19:06:04.211 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-13 19:06:04.225249 | orchestrator | 19:06:04.225 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-13 19:06:04.225329 | orchestrator | 19:06:04.225 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-13 19:06:04.225636 | orchestrator | 19:06:04.225 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-13 19:06:04.229456 | orchestrator | 19:06:04.229 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-13 19:06:04.238091 | orchestrator | 19:06:04.237 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-13 19:06:04.238940 | orchestrator | 19:06:04.238 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-13 19:06:10.497752 | orchestrator | 19:06:10.497 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=0ee376ae-6cb4-43b2-b45e-69a9ea34c3c3] 2025-05-13 19:06:10.508994 | orchestrator | 19:06:10.508 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-13 19:06:10.517049 | orchestrator | 19:06:10.516 STDOUT terraform: local_file.inventory: Creating... 2025-05-13 19:06:10.520549 | orchestrator | 19:06:10.520 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
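The floating IP and its association are split into two resources, which is why the association can complete in 0s once the manager port exists. A sketch, assuming the pool name (the external network appears in the log only by UUID):

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "external"   # assumption: the pool/network name is not logged
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }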
2025-05-13 19:06:10.521277 | orchestrator | 19:06:10.521 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de19ade0b80a5af7609fcb2061fe520d8b719bf9] 2025-05-13 19:06:10.524478 | orchestrator | 19:06:10.524 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=73a46f42278359ae84c8f382b3ae94b833c2e45a] 2025-05-13 19:06:10.998001 | orchestrator | 19:06:10.997 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=0ee376ae-6cb4-43b2-b45e-69a9ea34c3c3] 2025-05-13 19:06:14.228224 | orchestrator | 19:06:14.227 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-13 19:06:14.228420 | orchestrator | 19:06:14.228 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-13 19:06:14.228448 | orchestrator | 19:06:14.228 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-13 19:06:14.237978 | orchestrator | 19:06:14.237 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-13 19:06:14.242268 | orchestrator | 19:06:14.241 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-13 19:06:14.242396 | orchestrator | 19:06:14.242 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-13 19:06:24.231641 | orchestrator | 19:06:24.231 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-13 19:06:24.231822 | orchestrator | 19:06:24.231 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-13 19:06:24.231852 | orchestrator | 19:06:24.231 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-13 19:06:24.238913 | orchestrator | 19:06:24.238 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-13 19:06:24.242305 | orchestrator | 19:06:24.241 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-13 19:06:24.242597 | orchestrator | 19:06:24.242 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-13 19:06:24.644581 | orchestrator | 19:06:24.644 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=cf4b5387-d93c-47cb-bee6-d48e1a031021] 2025-05-13 19:06:24.678344 | orchestrator | 19:06:24.677 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=79badc24-cdbf-46ce-b5ec-c84d7ffe17df] 2025-05-13 19:06:24.707005 | orchestrator | 19:06:24.706 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=6650cd62-8b1a-442a-bdef-e297e8261a59] 2025-05-13 19:06:34.233547 | orchestrator | 19:06:34.233 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-13 19:06:34.233630 | orchestrator | 19:06:34.233 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-05-13 19:06:34.243271 | orchestrator | 19:06:34.242 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-05-13 19:06:34.830917 | orchestrator | 19:06:34.830 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2ddaec9f-306a-43c7-aa78-44a940d2a34d] 2025-05-13 19:06:34.875872 | orchestrator | 19:06:34.875 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f225cad2-98e0-4a79-99d7-8db14e9279d5] 2025-05-13 19:06:34.895934 | orchestrator | 19:06:34.895 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=56878305-f6ec-4134-a8e7-3c68849a0b17] 2025-05-13 19:06:34.910147 | orchestrator | 19:06:34.909 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-13 19:06:34.919923 | orchestrator | 19:06:34.919 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1679451368275538026] 2025-05-13 19:06:34.925150 | orchestrator | 19:06:34.924 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-13 19:06:34.926619 | orchestrator | 19:06:34.926 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-13 19:06:34.929026 | orchestrator | 19:06:34.928 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-13 19:06:34.934576 | orchestrator | 19:06:34.934 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-13 19:06:34.938720 | orchestrator | 19:06:34.938 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-13 19:06:34.941850 | orchestrator | 19:06:34.941 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-13 19:06:34.953287 | orchestrator | 19:06:34.953 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-13 19:06:34.958062 | orchestrator | 19:06:34.957 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-13 19:06:34.965660 | orchestrator | 19:06:34.965 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-13 19:06:34.970106 | orchestrator | 19:06:34.969 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
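null_resource.node_semaphore completes in 0s because it creates nothing; it only gates the volume attachments on all node servers existing, which is why the attachments start in a burst right after it. The attachment IDs in the log pair volumes [0..8] round-robin with node_server[3..5], so the index arithmetic below is an inference from those IDs:

    resource "null_resource" "node_semaphore" {
      depends_on = [openstack_compute_instance_v2.node_server]
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count = 9
      # 0,3,6 -> node_server[3]; 1,4,7 -> node_server[4]; 2,5,8 -> node_server[5]
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
      depends_on  = [null_resource.node_semaphore]
    }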
2025-05-13 19:06:40.243113 | orchestrator | 19:06:40.242 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=56878305-f6ec-4134-a8e7-3c68849a0b17/04d2f464-e449-42d7-9ceb-0224b6b42ef4] 2025-05-13 19:06:40.251770 | orchestrator | 19:06:40.251 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=6650cd62-8b1a-442a-bdef-e297e8261a59/10c33077-7b2d-46df-acf0-04e3d7859f61] 2025-05-13 19:06:40.267900 | orchestrator | 19:06:40.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=79badc24-cdbf-46ce-b5ec-c84d7ffe17df/9d4a667e-1daa-4ea2-845b-5122e74908eb] 2025-05-13 19:06:40.295990 | orchestrator | 19:06:40.295 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=56878305-f6ec-4134-a8e7-3c68849a0b17/ca00bcd5-8e8a-4b90-8497-af6d74b86161] 2025-05-13 19:06:40.297720 | orchestrator | 19:06:40.297 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=6650cd62-8b1a-442a-bdef-e297e8261a59/5a89f530-918e-4949-9347-1038fd288b0d] 2025-05-13 19:06:40.304564 | orchestrator | 19:06:40.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=79badc24-cdbf-46ce-b5ec-c84d7ffe17df/97094a75-4993-40db-897e-adadcd017b36] 2025-05-13 19:06:40.331184 | orchestrator | 19:06:40.330 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=79badc24-cdbf-46ce-b5ec-c84d7ffe17df/e87b71fc-701a-46cb-bbd9-3f15f37c3043] 2025-05-13 19:06:40.331925 | orchestrator | 19:06:40.331 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=6650cd62-8b1a-442a-bdef-e297e8261a59/0bd34d58-f920-45be-9e9c-4745e29ec711] 2025-05-13 19:06:40.333814 | orchestrator | 19:06:40.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=56878305-f6ec-4134-a8e7-3c68849a0b17/34a01356-b2ad-4692-b4fa-0e371ae7ecbd] 2025-05-13 19:06:44.961802 | orchestrator | 19:06:44.961 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-13 19:06:54.961819 | orchestrator | 19:06:54.961 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-13 19:06:55.377281 | orchestrator | 19:06:55.377 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=18b89dc0-1d4c-40de-a344-b5addb0be51a] 2025-05-13 19:06:55.399268 | orchestrator | 19:06:55.398 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
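Both outputs print blank in the summary that follows because they are declared sensitive; Terraform masks them in plan and apply output but still writes them to state and `terraform output`. Plausibly along these lines, where what each output references is an assumption:

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address
      sensitive = true   # masked in apply output, readable via `terraform output`
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key
      sensitive = true
    }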
2025-05-13 19:06:55.399340 | orchestrator | 19:06:55.399 STDOUT terraform: Outputs: 2025-05-13 19:06:55.399351 | orchestrator | 19:06:55.399 STDOUT terraform: manager_address = 2025-05-13 19:06:55.399360 | orchestrator | 19:06:55.399 STDOUT terraform: private_key = 2025-05-13 19:06:55.667975 | orchestrator | ok: Runtime: 0:01:34.366780 2025-05-13 19:06:55.704786 | 2025-05-13 19:06:55.704907 | TASK [Create infrastructure (stable)] 2025-05-13 19:06:56.239457 | orchestrator | skipping: Conditional result was False 2025-05-13 19:06:56.257280 | 2025-05-13 19:06:56.257441 | TASK [Fetch manager address] 2025-05-13 19:06:56.693218 | orchestrator | ok 2025-05-13 19:06:56.703302 | 2025-05-13 19:06:56.703429 | TASK [Set manager_host address] 2025-05-13 19:06:56.783437 | orchestrator | ok 2025-05-13 19:06:56.792847 | 2025-05-13 19:06:56.792999 | LOOP [Update ansible collections] 2025-05-13 19:06:57.682046 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-13 19:06:57.682389 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 19:06:57.682447 | orchestrator | Starting galaxy collection install process 2025-05-13 19:06:57.682489 | orchestrator | Process install dependency map 2025-05-13 19:06:57.682526 | orchestrator | Starting collection install process 2025-05-13 19:06:57.682561 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-05-13 19:06:57.682603 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-05-13 19:06:57.682644 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-13 19:06:57.682710 | orchestrator | ok: Item: commons Runtime: 0:00:00.575096 2025-05-13 19:06:58.570134 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-13 19:06:58.570310 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 19:06:58.570357 | orchestrator | Starting galaxy collection install process 2025-05-13 19:06:58.570392 | orchestrator | Process install dependency map 2025-05-13 19:06:58.570440 | orchestrator | Starting collection install process 2025-05-13 19:06:58.570471 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-05-13 19:06:58.570500 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-05-13 19:06:58.570528 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-13 19:06:58.570574 | orchestrator | ok: Item: services Runtime: 0:00:00.600842 2025-05-13 19:06:58.594046 | 2025-05-13 19:06:58.594197 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-13 19:07:09.154985 | orchestrator | ok 2025-05-13 19:07:09.164857 | 2025-05-13 19:07:09.165006 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-13 19:08:09.199709 | orchestrator | ok 2025-05-13 19:08:09.211009 | 2025-05-13 19:08:09.211164 | TASK [Fetch manager ssh hostkey] 2025-05-13 19:08:10.792034 | orchestrator | Output suppressed because no_log was given 2025-05-13 19:08:10.806389 | 2025-05-13 19:08:10.806580 | TASK [Get ssh keypair from terraform environment] 2025-05-13 19:08:11.344666 | orchestrator 
| ok: Runtime: 0:00:00.008848 2025-05-13 19:08:11.361252 | 2025-05-13 19:08:11.361431 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-13 19:08:11.410540 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-13 19:08:11.420480 | 2025-05-13 19:08:11.420609 | TASK [Run manager part 0] 2025-05-13 19:08:12.210950 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 19:08:12.256889 | orchestrator | 2025-05-13 19:08:12.256928 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-13 19:08:12.256934 | orchestrator | 2025-05-13 19:08:12.256946 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-13 19:08:14.041942 | orchestrator | ok: [testbed-manager] 2025-05-13 19:08:14.042091 | orchestrator | 2025-05-13 19:08:14.042136 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-13 19:08:14.042153 | orchestrator | 2025-05-13 19:08:14.042166 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:08:16.080953 | orchestrator | ok: [testbed-manager] 2025-05-13 19:08:16.081114 | orchestrator | 2025-05-13 19:08:16.081133 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-13 19:08:16.753053 | orchestrator | ok: [testbed-manager] 2025-05-13 19:08:16.753114 | orchestrator | 2025-05-13 19:08:16.753121 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-13 19:08:16.804758 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.804821 | orchestrator | 2025-05-13 19:08:16.804834 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-13 19:08:16.834860 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.834899 | orchestrator | 2025-05-13 19:08:16.834905 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-13 19:08:16.859966 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.859999 | orchestrator | 2025-05-13 19:08:16.860004 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-13 19:08:16.890392 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.890434 | orchestrator | 2025-05-13 19:08:16.890439 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-13 19:08:16.920290 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.920331 | orchestrator | 2025-05-13 19:08:16.920338 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-13 19:08:16.949361 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.949397 | orchestrator | 2025-05-13 19:08:16.949405 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-13 19:08:16.982951 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:08:16.982993 | orchestrator | 2025-05-13 19:08:16.983000 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-13 19:08:17.837034 | orchestrator | changed: 
[testbed-manager] 2025-05-13 19:08:17.837105 | orchestrator | 2025-05-13 19:08:17.837113 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-13 19:11:11.743667 | orchestrator | changed: [testbed-manager] 2025-05-13 19:11:11.743734 | orchestrator | 2025-05-13 19:11:11.743746 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-13 19:12:36.432312 | orchestrator | changed: [testbed-manager] 2025-05-13 19:12:36.434785 | orchestrator | 2025-05-13 19:12:36.434818 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-13 19:12:57.046085 | orchestrator | changed: [testbed-manager] 2025-05-13 19:12:57.046208 | orchestrator | 2025-05-13 19:12:57.046228 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-13 19:13:06.185234 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:06.185340 | orchestrator | 2025-05-13 19:13:06.185356 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-13 19:13:06.235049 | orchestrator | ok: [testbed-manager] 2025-05-13 19:13:06.235122 | orchestrator | 2025-05-13 19:13:06.235132 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-13 19:13:07.076309 | orchestrator | ok: [testbed-manager] 2025-05-13 19:13:07.076398 | orchestrator | 2025-05-13 19:13:07.076415 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-13 19:13:07.836474 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:07.836582 | orchestrator | 2025-05-13 19:13:07.836599 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-13 19:13:14.335116 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:14.335206 | orchestrator | 2025-05-13 19:13:14.335242 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-13 19:13:20.499911 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:20.500029 | orchestrator | 2025-05-13 19:13:20.500058 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-13 19:13:23.214549 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:23.214704 | orchestrator | 2025-05-13 19:13:23.214721 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-13 19:13:24.923124 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:24.923221 | orchestrator | 2025-05-13 19:13:24.923238 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-13 19:13:26.065562 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-13 19:13:26.065680 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-13 19:13:26.065696 | orchestrator | 2025-05-13 19:13:26.065709 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-13 19:13:26.109451 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-13 19:13:26.109538 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-13 19:13:26.109553 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-05-13 19:13:26.109565 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-13 19:13:29.319739 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-13 19:13:29.319813 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-13 19:13:29.319822 | orchestrator | 2025-05-13 19:13:29.319829 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-13 19:13:29.904038 | orchestrator | changed: [testbed-manager] 2025-05-13 19:13:29.904082 | orchestrator | 2025-05-13 19:13:29.904090 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-13 19:16:49.221769 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-13 19:16:49.221888 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-13 19:16:49.221904 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-13 19:16:49.221915 | orchestrator | 2025-05-13 19:16:49.221925 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-13 19:16:51.603370 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-13 19:16:51.603472 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-13 19:16:51.603486 | orchestrator | 2025-05-13 19:16:51.603499 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-13 19:16:51.603512 | orchestrator | 2025-05-13 19:16:51.603523 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:16:53.023026 | orchestrator | ok: [testbed-manager] 2025-05-13 19:16:53.023201 | orchestrator | 2025-05-13 19:16:53.023223 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-13 19:16:53.080042 | orchestrator | ok: [testbed-manager] 2025-05-13 19:16:53.080123 | orchestrator | 2025-05-13 19:16:53.080134 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-13 19:16:53.153156 | orchestrator | ok: [testbed-manager] 2025-05-13 19:16:53.153218 | orchestrator | 2025-05-13 19:16:53.153224 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-13 19:16:53.981226 | orchestrator | changed: [testbed-manager] 2025-05-13 19:16:53.981270 | orchestrator | 2025-05-13 19:16:53.981277 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-13 19:16:54.737631 | orchestrator | changed: [testbed-manager] 2025-05-13 19:16:54.737677 | orchestrator | 2025-05-13 19:16:54.737685 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-13 19:16:56.158114 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-13 19:16:56.158240 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-13 19:16:56.158258 | orchestrator | 2025-05-13 19:16:56.158287 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-13 19:16:57.583609 | orchestrator | changed: [testbed-manager] 2025-05-13 19:16:57.583690 | orchestrator | 2025-05-13 19:16:57.583702 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-05-13 19:16:59.334001 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:16:59.334166 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-13 19:16:59.334182 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:16:59.334194 | orchestrator | 2025-05-13 19:16:59.334206 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-13 19:16:59.923235 | orchestrator | changed: [testbed-manager] 2025-05-13 19:16:59.923334 | orchestrator | 2025-05-13 19:16:59.923351 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-13 19:16:59.995018 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:16:59.995106 | orchestrator | 2025-05-13 19:16:59.995121 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-13 19:17:00.911602 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:17:00.911701 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:00.911716 | orchestrator | 2025-05-13 19:17:00.911729 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-13 19:17:00.944629 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:00.944733 | orchestrator | 2025-05-13 19:17:00.944749 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-13 19:17:00.987110 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:00.987193 | orchestrator | 2025-05-13 19:17:00.987208 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-13 19:17:01.029060 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:01.029156 | orchestrator | 2025-05-13 19:17:01.029172 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-13 19:17:01.087442 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:01.087539 | orchestrator | 2025-05-13 19:17:01.087564 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-13 19:17:01.810218 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:01.810318 | orchestrator | 2025-05-13 19:17:01.810335 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-13 19:17:01.810348 | orchestrator | 2025-05-13 19:17:01.810362 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:17:03.235718 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:03.235848 | orchestrator | 2025-05-13 19:17:03.235866 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-13 19:17:04.210930 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:04.211031 | orchestrator | 2025-05-13 19:17:04.211047 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:17:04.211061 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-13 19:17:04.211072 | orchestrator | 2025-05-13 19:17:04.806573 | orchestrator | ok: Runtime: 0:08:52.621986 2025-05-13 19:17:04.828147 | 2025-05-13 19:17:04.828342 | TASK [Point out that logging in to the manager is now possible] 2025-05-13 19:17:04.873001 | 
orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-13 19:17:04.880629 | 2025-05-13 19:17:04.880733 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-13 19:17:04.910519 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2025-05-13 19:17:04.917389 | 2025-05-13 19:17:04.917495 | TASK [Run manager part 1 + 2] 2025-05-13 19:17:05.804933 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-13 19:17:05.862183 | orchestrator | 2025-05-13 19:17:05.862240 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-13 19:17:05.862248 | orchestrator | 2025-05-13 19:17:05.862262 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:17:08.743251 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:08.743316 | orchestrator | 2025-05-13 19:17:08.743339 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-13 19:17:08.784687 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:08.784747 | orchestrator | 2025-05-13 19:17:08.784759 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-13 19:17:08.828520 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:08.828580 | orchestrator | 2025-05-13 19:17:08.828590 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 19:17:08.878707 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:08.878770 | orchestrator | 2025-05-13 19:17:08.878781 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 19:17:08.955221 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:08.955278 | orchestrator | 2025-05-13 19:17:08.955289 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 19:17:09.020046 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:09.020103 | orchestrator | 2025-05-13 19:17:09.020113 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 19:17:09.065075 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-13 19:17:09.065127 | orchestrator | 2025-05-13 19:17:09.065132 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13 19:17:09.806212 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:09.806287 | orchestrator | 2025-05-13 19:17:09.806303 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 19:17:09.861260 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:09.861320 | orchestrator | 2025-05-13 19:17:09.861329 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 19:17:11.254185 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:11.254260 | orchestrator | 2025-05-13 19:17:11.254279 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 19:17:11.851671 | orchestrator | ok: [testbed-manager]
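Together with the ubuntu.sources copy that follows, these repository tasks move the manager from the legacy /etc/apt/sources.list to the deb822 format Ubuntu uses since 24.04 (which is why the "Ubuntu < 24.04" branch above is skipped). A minimal shell sketch of the end state the role converges to; the mirror URIs below are assumptions, since the role templates the actual file from its own repository configuration:

```bash
#!/usr/bin/env bash
# Sketch only: converge APT onto a deb822 ubuntu.sources file, as the
# osism.commons.repository Ubuntu tasks above do. Mirror URIs are assumed.
set -euo pipefail

mkdir -p /etc/apt/sources.list.d   # "Create /etc/apt/sources.list.d directory"
rm -f /etc/apt/sources.list        # "Remove sources.list file"

cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF

apt-get update                     # "Update package cache"
```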
2025-05-13 19:17:11.851732 | orchestrator | 2025-05-13 19:17:11.851741 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 19:17:13.025012 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:13.025094 | orchestrator | 2025-05-13 19:17:13.025121 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 19:17:25.936747 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:25.936986 | orchestrator | 2025-05-13 19:17:25.937009 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-13 19:17:26.602364 | orchestrator | ok: [testbed-manager] 2025-05-13 19:17:26.602475 | orchestrator | 2025-05-13 19:17:26.602496 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-13 19:17:26.694142 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:26.694239 | orchestrator | 2025-05-13 19:17:26.694255 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-13 19:17:27.664106 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:27.664208 | orchestrator | 2025-05-13 19:17:27.664224 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-13 19:17:28.679393 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:28.679539 | orchestrator | 2025-05-13 19:17:28.679564 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-13 19:17:29.263150 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:29.263293 | orchestrator | 2025-05-13 19:17:29.263307 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-13 19:17:29.303957 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-13 19:17:29.304101 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-13 19:17:29.304115 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-13 19:17:29.304127 | orchestrator | deprecation_warnings=False in ansible.cfg. 
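The key-copy tasks and the synchronize-based "Copy testbed repo" task above (its result follows below, after the rsync-related deprecation warning) push the deployer's SSH identity and the checked-out testbed configuration onto the manager. Roughly, in shell, run from the orchestrator; the operator user dragon and the source checkout path are assumptions taken from context:

```bash
# Rough shell analogue; the play itself uses the copy and synchronize
# modules. User name and paths are assumptions, not taken from the log.
scp -p id_rsa.pub dragon@testbed-manager:.ssh/id_rsa.pub
scp -p id_rsa     dragon@testbed-manager:.ssh/id_rsa
ssh dragon@testbed-manager 'chmod 0600 .ssh/id_rsa'                        # private key must stay 0600
ssh dragon@testbed-manager 'sudo install -d -o dragon -g dragon /opt/configuration'
rsync -a --delete ~/src/github.com/osism/testbed/ dragon@testbed-manager:/opt/configuration/
```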
2025-05-13 19:17:31.169077 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:31.169135 | orchestrator | 2025-05-13 19:17:31.169143 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-13 19:17:40.330565 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-13 19:17:40.330685 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-13 19:17:40.330710 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-13 19:17:40.330731 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-13 19:17:40.330751 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-13 19:17:40.330769 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-13 19:17:40.330787 | orchestrator | 2025-05-13 19:17:40.330805 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-13 19:17:41.400925 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:41.401026 | orchestrator | 2025-05-13 19:17:41.401042 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-13 19:17:41.447796 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:41.447920 | orchestrator | 2025-05-13 19:17:41.447947 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-13 19:17:44.556374 | orchestrator | changed: [testbed-manager] 2025-05-13 19:17:44.556461 | orchestrator | 2025-05-13 19:17:44.556477 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-13 19:17:44.602151 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:17:44.602253 | orchestrator | 2025-05-13 19:17:44.602268 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-13 19:19:19.733357 | orchestrator | changed: [testbed-manager] 2025-05-13 19:19:19.733400 | orchestrator | 2025-05-13 19:19:19.733407 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 19:19:20.956727 | orchestrator | ok: [testbed-manager] 2025-05-13 19:19:20.956779 | orchestrator | 2025-05-13 19:19:20.956789 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:19:20.956799 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-13 19:19:20.956806 | orchestrator | 2025-05-13 19:19:21.549603 | orchestrator | ok: Runtime: 0:02:15.841525 2025-05-13 19:19:21.568484 | 2025-05-13 19:19:21.568638 | TASK [Reboot manager] 2025-05-13 19:19:23.106684 | orchestrator | ok: Runtime: 0:00:00.971947 2025-05-13 19:19:23.123425 | 2025-05-13 19:19:23.123596 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-13 19:19:37.656491 | orchestrator | ok 2025-05-13 19:19:37.664409 | 2025-05-13 19:19:37.664525 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-13 19:20:37.710558 | orchestrator | ok 2025-05-13 19:20:37.720459 | 2025-05-13 19:20:37.720582 | TASK [Deploy manager + bootstrap nodes] 2025-05-13 19:20:40.320889 | orchestrator | 2025-05-13 19:20:40.321093 | orchestrator | # DEPLOY MANAGER 2025-05-13 19:20:40.321117 | orchestrator | 2025-05-13 19:20:40.321132 | orchestrator | + set -e 2025-05-13 19:20:40.321146 | orchestrator | + echo 2025-05-13 19:20:40.321160 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-13 19:20:40.321209 | orchestrator | + echo 2025-05-13 19:20:40.321267 | orchestrator | + cat /opt/manager-vars.sh 2025-05-13 19:20:40.324512 | orchestrator | export NUMBER_OF_NODES=6 2025-05-13 19:20:40.324560 | orchestrator | 2025-05-13 19:20:40.324573 | orchestrator | export CEPH_VERSION=reef 2025-05-13 19:20:40.324587 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-13 19:20:40.324600 | orchestrator | export MANAGER_VERSION=latest 2025-05-13 19:20:40.324623 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-13 19:20:40.324634 | orchestrator | 2025-05-13 19:20:40.324653 | orchestrator | export ARA=false 2025-05-13 19:20:40.324666 | orchestrator | export TEMPEST=false 2025-05-13 19:20:40.324684 | orchestrator | export IS_ZUUL=true 2025-05-13 19:20:40.324696 | orchestrator | 2025-05-13 19:20:40.324714 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 19:20:40.324726 | orchestrator | export EXTERNAL_API=false 2025-05-13 19:20:40.324737 | orchestrator | 2025-05-13 19:20:40.324759 | orchestrator | export IMAGE_USER=ubuntu 2025-05-13 19:20:40.324770 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-13 19:20:40.324781 | orchestrator | 2025-05-13 19:20:40.324797 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-13 19:20:40.324817 | orchestrator | 2025-05-13 19:20:40.324828 | orchestrator | + echo 2025-05-13 19:20:40.324839 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 19:20:40.325616 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 19:20:40.325683 | orchestrator | ++ INTERACTIVE=false 2025-05-13 19:20:40.325696 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 19:20:40.325740 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 19:20:40.325766 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 19:20:40.325778 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 19:20:40.325817 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 19:20:40.325830 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 19:20:40.325842 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 19:20:40.325854 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 19:20:40.325866 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 19:20:40.325878 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 19:20:40.325891 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 19:20:40.325903 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 19:20:40.325914 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 19:20:40.325925 | orchestrator | ++ export ARA=false 2025-05-13 19:20:40.325945 | orchestrator | ++ ARA=false 2025-05-13 19:20:40.325999 | orchestrator | ++ export TEMPEST=false 2025-05-13 19:20:40.326011 | orchestrator | ++ TEMPEST=false 2025-05-13 19:20:40.326067 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 19:20:40.326079 | orchestrator | ++ IS_ZUUL=true 2025-05-13 19:20:40.326090 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 19:20:40.326101 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 19:20:40.326112 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 19:20:40.326123 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 19:20:40.326139 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 19:20:40.326150 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 19:20:40.326161 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 19:20:40.326204 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-13 
19:20:40.326215 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 19:20:40.326226 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 19:20:40.326238 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-13 19:20:40.382882 | orchestrator | + docker version 2025-05-13 19:20:40.661313 | orchestrator | Client: Docker Engine - Community 2025-05-13 19:20:40.661435 | orchestrator | Version: 27.5.1 2025-05-13 19:20:40.661456 | orchestrator | API version: 1.47 2025-05-13 19:20:40.661465 | orchestrator | Go version: go1.22.11 2025-05-13 19:20:40.661474 | orchestrator | Git commit: 9f9e405 2025-05-13 19:20:40.661485 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-13 19:20:40.661495 | orchestrator | OS/Arch: linux/amd64 2025-05-13 19:20:40.661503 | orchestrator | Context: default 2025-05-13 19:20:40.661511 | orchestrator | 2025-05-13 19:20:40.661520 | orchestrator | Server: Docker Engine - Community 2025-05-13 19:20:40.661528 | orchestrator | Engine: 2025-05-13 19:20:40.661537 | orchestrator | Version: 27.5.1 2025-05-13 19:20:40.661545 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-13 19:20:40.661553 | orchestrator | Go version: go1.22.11 2025-05-13 19:20:40.661562 | orchestrator | Git commit: 4c9b3b0 2025-05-13 19:20:40.661597 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-13 19:20:40.661606 | orchestrator | OS/Arch: linux/amd64 2025-05-13 19:20:40.661614 | orchestrator | Experimental: false 2025-05-13 19:20:40.661622 | orchestrator | containerd: 2025-05-13 19:20:40.661630 | orchestrator | Version: 1.7.27 2025-05-13 19:20:40.661638 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-13 19:20:40.661646 | orchestrator | runc: 2025-05-13 19:20:40.661654 | orchestrator | Version: 1.2.5 2025-05-13 19:20:40.661662 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-13 19:20:40.661671 | orchestrator | docker-init: 2025-05-13 19:20:40.661691 | orchestrator | Version: 0.19.0 2025-05-13 19:20:40.661699 | orchestrator | GitCommit: de40ad0 2025-05-13 19:20:40.664498 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-13 19:20:40.674609 | orchestrator | + set -e 2025-05-13 19:20:40.674684 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 19:20:40.674699 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 19:20:40.674711 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 19:20:40.674722 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 19:20:40.674733 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 19:20:40.674747 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 19:20:40.674761 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 19:20:40.674773 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 19:20:40.674785 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 19:20:40.674796 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 19:20:40.674806 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 19:20:40.674818 | orchestrator | ++ export ARA=false 2025-05-13 19:20:40.674829 | orchestrator | ++ ARA=false 2025-05-13 19:20:40.674840 | orchestrator | ++ export TEMPEST=false 2025-05-13 19:20:40.674854 | orchestrator | ++ TEMPEST=false 2025-05-13 19:20:40.674877 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 19:20:40.674904 | orchestrator | ++ IS_ZUUL=true 2025-05-13 19:20:40.674923 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 19:20:40.674940 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 19:20:40.674958 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 19:20:40.674975 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 19:20:40.674993 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 19:20:40.675011 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 19:20:40.675029 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 19:20:40.675046 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-13 19:20:40.675065 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 19:20:40.675082 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 19:20:40.675099 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 19:20:40.675118 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 19:20:40.675138 | orchestrator | ++ INTERACTIVE=false 2025-05-13 19:20:40.675156 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 19:20:40.675213 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 19:20:40.675232 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 19:20:40.675249 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-13 19:20:40.675268 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-13 19:20:40.680929 | orchestrator | + set -e 2025-05-13 19:20:40.680993 | orchestrator | + VERSION=reef 2025-05-13 19:20:40.681819 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-13 19:20:40.688264 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-13 19:20:40.688329 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-13 19:20:40.694672 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-13 19:20:40.699873 | orchestrator | + set -e 2025-05-13 19:20:40.699928 | orchestrator | + VERSION=2024.2 2025-05-13 19:20:40.700725 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-13 19:20:40.704850 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-13 19:20:40.704927 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-13 19:20:40.710546 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-13 19:20:40.711120 | orchestrator | ++ semver latest 7.0.0 2025-05-13 19:20:40.773827 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-13 19:20:40.773931 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-13 19:20:40.773949 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-13 19:20:40.773963 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-13 19:20:40.817039 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-13 19:20:40.819272 | orchestrator | + source /opt/venv/bin/activate 2025-05-13 19:20:40.820024 | orchestrator | ++ deactivate nondestructive 2025-05-13 19:20:40.820102 | orchestrator | ++ '[' -n '' ']' 2025-05-13 19:20:40.820147 | orchestrator | ++ '[' -n '' ']' 2025-05-13 19:20:40.820162 | orchestrator | ++ hash -r 2025-05-13 19:20:40.820244 | orchestrator | ++ '[' -n '' ']' 2025-05-13 19:20:40.820305 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-13 19:20:40.820319 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-13 19:20:40.820372 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-13 19:20:40.820396 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-13 19:20:40.820407 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-13 19:20:40.820444 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-13 19:20:40.820465 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-13 19:20:40.820487 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-13 19:20:40.820551 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-13 19:20:40.820571 | orchestrator | ++ export PATH 2025-05-13 19:20:40.820612 | orchestrator | ++ '[' -n '' ']' 2025-05-13 19:20:40.820626 | orchestrator | ++ '[' -z '' ']' 2025-05-13 19:20:40.820637 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-13 19:20:40.820648 | orchestrator | ++ PS1='(venv) ' 2025-05-13 19:20:40.820679 | orchestrator | ++ export PS1 2025-05-13 19:20:40.820705 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-13 19:20:40.820727 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-13 19:20:40.820742 | orchestrator | ++ hash -r 2025-05-13 19:20:40.820754 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-13 19:20:42.137979 | orchestrator | 2025-05-13 19:20:42.138216 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-13 19:20:42.138239 | orchestrator | 2025-05-13 19:20:42.138274 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 19:20:42.712762 | orchestrator | ok: [testbed-manager] 2025-05-13 19:20:42.712904 | orchestrator | 2025-05-13 19:20:42.712923 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-13 19:20:43.713888 | orchestrator | changed: [testbed-manager] 2025-05-13 19:20:43.714008 | orchestrator | 2025-05-13 19:20:43.714085 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-13 19:20:43.714099 | orchestrator | 2025-05-13 19:20:43.714110 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:20:46.196040 | orchestrator | ok: [testbed-manager] 2025-05-13 19:20:46.196127 | orchestrator | 2025-05-13 19:20:46.196144 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-13 19:20:51.365764 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-13 19:20:51.365881 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-13 19:20:51.365896 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-13 19:20:51.365911 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-13 19:20:51.365922 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-13 19:20:51.365933 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-13 19:20:51.365945 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-13 
19:20:51.365956 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-13 19:20:51.365967 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-13 19:20:51.365977 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-13 19:20:51.365988 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-13 19:20:51.365998 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-13 19:20:51.366093 | orchestrator | 2025-05-13 19:20:51.366108 | orchestrator | TASK [Check status] ************************************************************ 2025-05-13 19:22:07.304762 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 19:22:07.304918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-13 19:22:07.304944 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-13 19:22:07.304964 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-13 19:22:07.305001 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j164651912382.1546', 'results_file': '/home/dragon/.ansible_async/j164651912382.1546', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305032 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j264800032226.1571', 'results_file': '/home/dragon/.ansible_async/j264800032226.1571', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305060 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 19:22:07.305080 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
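The pulls above were dispatched as fire-and-forget async tasks, and this "Check status" task polls each async job (starting from 120 retries) until every pull reports finished, so the interleaved FAILED - RETRYING lines are expected polling noise rather than errors. The same fan-out/join pattern, sketched in plain shell:

```bash
#!/usr/bin/env bash
# Fan out the pulls, then join and propagate the first failure -- a shell
# analogue of the Ansible async + async_status polling used above.
images=(
  registry.osism.tech/osism/ara-server:1.7.2
  registry.osism.tech/osism/osism:latest
  # ...remaining images from the pull task above
)

pids=()
for image in "${images[@]}"; do
  docker pull --quiet "$image" &   # async: start the job, do not wait
  pids+=("$!")
done

rc=0
for pid in "${pids[@]}"; do        # async_status: wait for each job
  wait "$pid" || rc=$?
done
exit "$rc"
```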
2025-05-13 19:22:07.305101 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j649260693002.1596', 'results_file': '/home/dragon/.ansible_async/j649260693002.1596', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305123 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j482169904957.1627', 'results_file': '/home/dragon/.ansible_async/j482169904957.1627', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305157 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j938694002133.1660', 'results_file': '/home/dragon/.ansible_async/j938694002133.1660', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305178 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j870694295392.1692', 'results_file': '/home/dragon/.ansible_async/j870694295392.1692', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305199 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-13 19:22:07.305219 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j207538858902.1724', 'results_file': '/home/dragon/.ansible_async/j207538858902.1724', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305240 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j647377150878.1764', 'results_file': '/home/dragon/.ansible_async/j647377150878.1764', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305264 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j669392569906.1797', 'results_file': '/home/dragon/.ansible_async/j669392569906.1797', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305374 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j62315752856.1828', 'results_file': '/home/dragon/.ansible_async/j62315752856.1828', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305399 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j885807070857.1856', 'results_file': '/home/dragon/.ansible_async/j885807070857.1856', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305454 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j292163389721.1895', 'results_file': '/home/dragon/.ansible_async/j292163389721.1895', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 'item'}) 2025-05-13 19:22:07.305477 | orchestrator | 2025-05-13 19:22:07.305500 | orchestrator | 
TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-13 19:22:07.358486 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:07.358598 | orchestrator | 2025-05-13 19:22:07.358615 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-13 19:22:07.844091 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:07.844190 | orchestrator | 2025-05-13 19:22:07.844205 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-13 19:22:08.197054 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:08.197165 | orchestrator | 2025-05-13 19:22:08.197185 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-13 19:22:08.530416 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:08.530507 | orchestrator | 2025-05-13 19:22:08.530523 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-13 19:22:08.588323 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:22:08.588426 | orchestrator | 2025-05-13 19:22:08.588442 | orchestrator | TASK [Check if /etc/OTC_region exists] ***************************************** 2025-05-13 19:22:08.924923 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:08.925036 | orchestrator | 2025-05-13 19:22:08.925063 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-13 19:22:09.053974 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:22:09.054176 | orchestrator | 2025-05-13 19:22:09.054206 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-13 19:22:09.054228 | orchestrator | 2025-05-13 19:22:09.054248 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:22:10.875171 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:10.875340 | orchestrator | 2025-05-13 19:22:10.875362 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-13 19:22:10.959012 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-13 19:22:10.959118 | orchestrator | 2025-05-13 19:22:10.959134 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-13 19:22:11.026443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-13 19:22:11.026548 | orchestrator | 2025-05-13 19:22:11.026562 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-13 19:22:12.154976 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-13 19:22:12.155072 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-13 19:22:12.155080 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-13 19:22:12.155091 | orchestrator | 2025-05-13 19:22:12.155098 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-13 19:22:13.949170 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-13 19:22:13.949270 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-13 19:22:13.949278 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-13 19:22:13.949302 | 
orchestrator | 2025-05-13 19:22:13.949327 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-13 19:22:14.605813 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:14.605926 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:14.605944 | orchestrator | 2025-05-13 19:22:14.605957 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-13 19:22:15.290245 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:15.290399 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:15.290456 | orchestrator | 2025-05-13 19:22:15.290471 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-13 19:22:15.351877 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:22:15.351975 | orchestrator | 2025-05-13 19:22:15.351988 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-13 19:22:15.717402 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:15.717486 | orchestrator | 2025-05-13 19:22:15.717493 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-13 19:22:15.784149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-13 19:22:15.784240 | orchestrator | 2025-05-13 19:22:15.784254 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-13 19:22:16.837864 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:16.837958 | orchestrator | 2025-05-13 19:22:16.837971 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-13 19:22:17.750879 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:17.750992 | orchestrator | 2025-05-13 19:22:17.751008 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-13 19:22:21.132026 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:21.132145 | orchestrator | 2025-05-13 19:22:21.132161 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-13 19:22:21.291611 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-13 19:22:21.291722 | orchestrator | 2025-05-13 19:22:21.291737 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-13 19:22:21.392504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-13 19:22:21.392618 | orchestrator | 2025-05-13 19:22:21.392634 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-13 19:22:23.981406 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:23.981519 | orchestrator | 2025-05-13 19:22:23.981536 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-13 19:22:24.108291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-13 19:22:24.108502 | orchestrator | 2025-05-13 19:22:24.108529 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-13 19:22:25.264361 | 
orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-13 19:22:25.264473 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-13 19:22:25.264488 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-13 19:22:25.264500 | orchestrator | 2025-05-13 19:22:25.264513 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-13 19:22:25.339951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-13 19:22:25.340047 | orchestrator | 2025-05-13 19:22:25.340061 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-13 19:22:25.986523 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-13 19:22:25.986636 | orchestrator | 2025-05-13 19:22:25.986651 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-13 19:22:26.656256 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:26.656421 | orchestrator | 2025-05-13 19:22:26.656439 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-13 19:22:27.295182 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:27.295294 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:27.295339 | orchestrator | 2025-05-13 19:22:27.295353 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-13 19:22:27.703461 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:27.703571 | orchestrator | 2025-05-13 19:22:27.703589 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-13 19:22:28.114292 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:28.114445 | orchestrator | 2025-05-13 19:22:28.114499 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-13 19:22:28.151088 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:22:28.151199 | orchestrator | 2025-05-13 19:22:28.151216 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-13 19:22:28.809399 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:28.809510 | orchestrator | 2025-05-13 19:22:28.809536 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-13 19:22:28.883584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-13 19:22:28.883686 | orchestrator | 2025-05-13 19:22:28.883701 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-13 19:22:29.684050 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-13 19:22:29.684162 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-13 19:22:29.684176 | orchestrator | 2025-05-13 19:22:29.684214 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-13 19:22:30.328140 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-13 19:22:30.328248 | orchestrator | 2025-05-13 19:22:30.328264 | orchestrator | TASK [osism.services.netbox : Copy netbox 
configuration file] ****************** 2025-05-13 19:22:31.001788 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:31.001901 | orchestrator | 2025-05-13 19:22:31.001918 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-13 19:22:31.050405 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:22:31.050514 | orchestrator | 2025-05-13 19:22:31.050530 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-13 19:22:31.692989 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:31.693106 | orchestrator | 2025-05-13 19:22:31.693123 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-13 19:22:33.527479 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:33.527590 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:33.527607 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:22:33.527620 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:33.527633 | orchestrator | 2025-05-13 19:22:33.527645 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-13 19:22:39.556492 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-13 19:22:39.556630 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-13 19:22:39.556645 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-13 19:22:39.556658 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-13 19:22:39.556671 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-13 19:22:39.556682 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-13 19:22:39.556693 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-13 19:22:39.556714 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-13 19:22:39.556725 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-13 19:22:39.556736 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-13 19:22:39.556748 | orchestrator | 2025-05-13 19:22:39.556760 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-13 19:22:40.223592 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-13 19:22:40.223733 | orchestrator | 2025-05-13 19:22:40.223764 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-13 19:22:40.312830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-13 19:22:40.312924 | orchestrator | 2025-05-13 19:22:40.312939 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-13 19:22:41.019036 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:41.019165 | orchestrator | 2025-05-13 19:22:41.019183 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-13 19:22:41.628581 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:41.628692 | orchestrator | 2025-05-13 19:22:41.628708 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-13 
19:22:42.363981 | orchestrator | changed: [testbed-manager] 2025-05-13 19:22:42.364091 | orchestrator | 2025-05-13 19:22:42.364110 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-13 19:22:44.702520 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:44.702629 | orchestrator | 2025-05-13 19:22:44.702645 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-13 19:22:45.738953 | orchestrator | ok: [testbed-manager] 2025-05-13 19:22:45.739069 | orchestrator | 2025-05-13 19:22:45.739094 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-13 19:23:07.942082 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-13 19:23:07.942213 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:07.942230 | orchestrator | 2025-05-13 19:23:07.942244 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-13 19:23:07.992524 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:07.992627 | orchestrator | 2025-05-13 19:23:07.992642 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-13 19:23:07.992654 | orchestrator | 2025-05-13 19:23:07.992665 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-13 19:23:08.032785 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:08.032889 | orchestrator | 2025-05-13 19:23:08.032905 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-13 19:23:08.091419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-13 19:23:08.091507 | orchestrator | 2025-05-13 19:23:08.091519 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-13 19:23:08.948120 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:08.948237 | orchestrator | 2025-05-13 19:23:08.948253 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-13 19:23:09.024907 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:09.025001 | orchestrator | 2025-05-13 19:23:09.025013 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-13 19:23:09.072155 | orchestrator | ok: [testbed-manager] => { 2025-05-13 19:23:09.072248 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-13 19:23:09.072260 | orchestrator | } 2025-05-13 19:23:09.072269 | orchestrator | 2025-05-13 19:23:09.072277 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-13 19:23:09.696294 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:09.696451 | orchestrator | 2025-05-13 19:23:09.696469 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-13 19:23:10.594298 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:10.594460 | orchestrator | 2025-05-13 19:23:10.594478 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-13 19:23:10.659305 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:10.659428 | orchestrator | 2025-05-13 19:23:10.659443 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-13 19:23:10.706119 | orchestrator | ok: [testbed-manager] => { 2025-05-13 19:23:10.706216 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-13 19:23:10.706231 | orchestrator | } 2025-05-13 19:23:10.706244 | orchestrator | 2025-05-13 19:23:10.706255 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-13 19:23:10.759210 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:10.759304 | orchestrator | 2025-05-13 19:23:10.759318 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-13 19:23:10.818447 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:10.818537 | orchestrator | 2025-05-13 19:23:10.818552 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-13 19:23:10.867726 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:10.867820 | orchestrator | 2025-05-13 19:23:10.867833 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-13 19:23:10.922845 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:10.922986 | orchestrator | 2025-05-13 19:23:10.923011 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-13 19:23:11.063580 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:11.063691 | orchestrator | 2025-05-13 19:23:11.063708 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-13 19:23:11.122403 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:23:11.122485 | orchestrator | 2025-05-13 19:23:11.122494 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-13 19:23:12.413283 | orchestrator | changed: [testbed-manager] 2025-05-13 19:23:12.413491 | orchestrator | 2025-05-13 19:23:12.413524 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-13 19:23:12.479458 | orchestrator | ok: [testbed-manager] 2025-05-13 19:23:12.479565 | orchestrator | 2025-05-13 19:23:12.479595 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-13 19:24:12.535211 | orchestrator | Pausing for 60 seconds 2025-05-13 19:24:12.535349 | orchestrator | changed: [testbed-manager] 2025-05-13 19:24:12.535364 | orchestrator | 2025-05-13 19:24:12.535375 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for a healthy netbox service] **** 2025-05-13 19:24:12.595014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-13 19:24:12.595126 | orchestrator | 2025-05-13 19:24:12.595141 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-13 19:28:14.077738 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-13 19:28:14.077876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-13 19:28:14.077883 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-13 19:28:14.077887 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-13 19:28:14.077892 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-13 19:28:14.077896 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-13 19:28:14.077900 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-13 19:28:14.077904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-13 19:28:14.077908 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-13 19:28:14.077912 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-13 19:28:14.077916 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-13 19:28:14.077919 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-13 19:28:14.077923 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-13 19:28:14.077927 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-13 19:28:14.077930 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-13 19:28:14.077934 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-13 19:28:14.077938 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-13 19:28:14.077951 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-13 19:28:14.077955 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-13 19:28:14.077959 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-13 19:28:14.077980 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-13 19:28:14.077985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-13 19:28:14.077989 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 
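These retries come from wait-for-healthy-service.yml: after the restart, the handler re-checks up to 60 times that all containers of the netbox stack report a good state before the play continues (the successful "changed" result follows below). A compressed shell rendition of that loop; the container name is an assumption:

```bash
#!/usr/bin/env bash
# Poll container health until "healthy" or retries are exhausted -- a shell
# sketch of the handler's retry loop. The container name is assumed.
retries=60
while [ "$retries" -gt 0 ]; do
  state=$(docker inspect --format '{{.State.Health.Status}}' netbox-netbox-1 2>/dev/null || echo unknown)
  [ "$state" = "healthy" ] && break
  retries=$((retries - 1))
  echo "FAILED - RETRYING: waiting for a healthy container ($retries retries left)"
  sleep 10
done
[ "$retries" -gt 0 ]   # non-zero exit status if the service never became healthy
```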
2025-05-13 19:28:14.078003 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-13 19:28:14.078007 | orchestrator |
2025-05-13 19:28:14.078011 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-13 19:28:16.145050 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:16.145157 | orchestrator |
2025-05-13 19:28:16.145174 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-13 19:28:16.268565 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-13 19:28:16.268658 | orchestrator |
2025-05-13 19:28:16.268670 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-13 19:28:16.328400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 19:28:16.328501 | orchestrator |
2025-05-13 19:28:16.328515 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-13 19:28:18.260291 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:18.260406 | orchestrator |
2025-05-13 19:28:18.260422 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-13 19:28:18.316105 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:18.316204 | orchestrator |
2025-05-13 19:28:18.316218 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-13 19:28:18.410975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-13 19:28:18.411083 | orchestrator |
2025-05-13 19:28:18.411099 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-13 19:28:21.324227 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-13 19:28:21.324366 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-13 19:28:21.324391 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-13 19:28:21.324411 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-13 19:28:21.324431 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-13 19:28:21.324460 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-13 19:28:21.324479 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-13 19:28:21.324499 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-13 19:28:21.324518 | orchestrator |
2025-05-13 19:28:21.324540 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-13 19:28:22.007139 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:22.007252 | orchestrator |
2025-05-13 19:28:22.007269 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-13 19:28:22.104650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-13 19:28:22.104721 | orchestrator |
2025-05-13 19:28:22.104728 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-13 19:28:23.383289 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-13 19:28:23.383374 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-13 19:28:23.383383 | orchestrator |
2025-05-13 19:28:23.383390 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-13 19:28:24.044498 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:24.044599 | orchestrator |
2025-05-13 19:28:24.044608 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-13 19:28:24.094086 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:28:24.094183 | orchestrator |
2025-05-13 19:28:24.094216 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-13 19:28:24.168860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-13 19:28:24.168969 | orchestrator |
2025-05-13 19:28:24.168985 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-13 19:28:25.718409 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 19:28:25.718524 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 19:28:25.718542 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:25.718556 | orchestrator |
2025-05-13 19:28:25.718569 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-13 19:28:26.422011 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:26.422178 | orchestrator |
2025-05-13 19:28:26.422195 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-13 19:28:26.516737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-13 19:28:26.516903 | orchestrator |
2025-05-13 19:28:26.516920 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-13 19:28:27.800442 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 19:28:27.800564 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-13 19:28:27.800580 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:27.800609 | orchestrator |
2025-05-13 19:28:27.800659 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-13 19:28:28.495016 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:28.495134 | orchestrator |
2025-05-13 19:28:28.495152 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-13 19:28:28.598956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-13 19:28:28.599064 | orchestrator |
2025-05-13 19:28:28.599080 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-13 19:28:29.214184 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:29.214294 | orchestrator |
2025-05-13 19:28:29.214309 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-13 19:28:29.658673 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:29.658774 | orchestrator |
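The two sysctl tasks above raise the kernel's inotify limits, which inotify-heavy containerized workloads on the manager depend on. Roughly equivalent shell commands, with placeholder values since the actual numbers are defined by the role:

    # Raise inotify limits at runtime (values are illustrative placeholders)
    sysctl -w fs.inotify.max_user_watches=524288
    sysctl -w fs.inotify.max_user_instances=512

    # Persist across reboots, analogous to what the Ansible sysctl module does
    cat > /etc/sysctl.d/99-inotify.conf <<EOF
    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 512
    EOF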
2025-05-13 19:28:29.658788 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-13 19:28:31.054584 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-13 19:28:31.054694 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-13 19:28:31.054710 | orchestrator |
2025-05-13 19:28:31.054724 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-13 19:28:31.721386 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:31.721503 | orchestrator |
2025-05-13 19:28:31.721522 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-13 19:28:32.136895 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:32.137000 | orchestrator |
2025-05-13 19:28:32.137017 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-13 19:28:32.526936 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:32.527035 | orchestrator |
2025-05-13 19:28:32.527047 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-13 19:28:32.580443 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:28:32.580540 | orchestrator |
2025-05-13 19:28:32.580555 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-13 19:28:32.665116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-13 19:28:32.665215 | orchestrator |
2025-05-13 19:28:32.665230 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-13 19:28:32.711223 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:32.711320 | orchestrator |
2025-05-13 19:28:32.711335 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-13 19:28:34.869571 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-13 19:28:34.869712 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-13 19:28:34.869729 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-13 19:28:34.869741 | orchestrator |
2025-05-13 19:28:34.869754 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-13 19:28:35.612300 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:35.612378 | orchestrator |
2025-05-13 19:28:35.612386 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-13 19:28:36.379286 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:36.379396 | orchestrator |
2025-05-13 19:28:36.379411 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-13 19:28:37.108094 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:37.108193 | orchestrator |
2025-05-13 19:28:37.108204 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-13 19:28:37.191650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-13 19:28:37.191759 | orchestrator |
2025-05-13 19:28:37.191800 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-13 19:28:37.246984 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:37.247097 | orchestrator |
2025-05-13 19:28:37.247112 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-13 19:28:38.032162 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-13 19:28:38.032285 | orchestrator |
2025-05-13 19:28:38.032338 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-13 19:28:38.128055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-13 19:28:38.128155 | orchestrator |
2025-05-13 19:28:38.128171 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-13 19:28:38.880580 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:38.880709 | orchestrator |
2025-05-13 19:28:38.880727 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-13 19:28:39.545584 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:39.545692 | orchestrator |
2025-05-13 19:28:39.545707 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-13 19:28:39.612282 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:28:39.612387 | orchestrator |
2025-05-13 19:28:39.612403 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-13 19:28:39.676627 | orchestrator | ok: [testbed-manager]
2025-05-13 19:28:39.676726 | orchestrator |
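MariaDB changed its recommended container healthcheck across major versions, so the role selects one of two healthcheck definitions depending on the image version; here the >= 11.0.0 variant is applied. A bash sketch of that version switch follows; the version tag and both healthcheck command strings are assumptions of this sketch, not the role's literal values:

    # Pick a healthcheck command depending on the MariaDB major version.
    # Tag and commands below are illustrative; the role defines its own.
    mariadb_version="11.4.2"          # hypothetical image tag in use
    major="${mariadb_version%%.*}"
    if [ "$major" -lt 11 ]; then
        healthcheck='mysqladmin status'
    else
        healthcheck='healthcheck.sh --connect'
    fi
    echo "using healthcheck: $healthcheck"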
2025-05-13 19:28:39.676741 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-13 19:28:40.616977 | orchestrator | changed: [testbed-manager]
2025-05-13 19:28:40.617082 | orchestrator |
2025-05-13 19:28:40.617099 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-13 19:29:24.623764 | orchestrator | changed: [testbed-manager]
2025-05-13 19:29:24.623942 | orchestrator |
2025-05-13 19:29:24.623962 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-13 19:29:25.308978 | orchestrator | ok: [testbed-manager]
2025-05-13 19:29:25.309089 | orchestrator |
2025-05-13 19:29:25.309105 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-13 19:29:28.194618 | orchestrator | changed: [testbed-manager]
2025-05-13 19:29:28.194726 | orchestrator |
2025-05-13 19:29:28.194741 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-13 19:29:28.265639 | orchestrator | ok: [testbed-manager]
2025-05-13 19:29:28.265743 | orchestrator |
2025-05-13 19:29:28.265758 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-13 19:29:28.265770 | orchestrator |
2025-05-13 19:29:28.265782 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-13 19:29:28.314757 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:29:28.314859 | orchestrator |
2025-05-13 19:29:28.314904 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-13 19:30:28.368413 | orchestrator | Pausing for 60 seconds
2025-05-13 19:30:28.368566 | orchestrator | changed: [testbed-manager]
2025-05-13 19:30:28.368584 | orchestrator |
2025-05-13 19:30:28.368598 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-13 19:30:32.810150 | orchestrator | changed: [testbed-manager]
2025-05-13 19:30:32.810258 | orchestrator |
2025-05-13 19:30:32.810268 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-13 19:31:14.532337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-13 19:31:14.533330 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-13 19:31:14.533380 | orchestrator | changed: [testbed-manager]
2025-05-13 19:31:14.533395 | orchestrator |
2025-05-13 19:31:14.533407 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-13 19:31:23.662563 | orchestrator | changed: [testbed-manager]
2025-05-13 19:31:23.662718 | orchestrator |
2025-05-13 19:31:23.662743 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-13 19:31:23.780901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-13 19:31:23.781055 | orchestrator |
2025-05-13 19:31:23.781069 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-13 19:31:23.781079 | orchestrator |
2025-05-13 19:31:23.781087 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-13 19:31:23.843527 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:31:23.843635 | orchestrator |
2025-05-13 19:31:23.843651 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:31:23.843664 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-13 19:31:23.843675 | orchestrator |
2025-05-13 19:31:23.957302 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-13 19:31:23.957389 | orchestrator | + deactivate
2025-05-13 19:31:23.957401 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-13 19:31:23.957411 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-13 19:31:23.957419 | orchestrator | + export PATH
2025-05-13 19:31:23.957427 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-13 19:31:23.957435 | orchestrator | + '[' -n '' ']'
2025-05-13 19:31:23.957443 | orchestrator | + hash -r
2025-05-13 19:31:23.957450 | orchestrator | + '[' -n '' ']'
2025-05-13 19:31:23.957458 | orchestrator | + unset VIRTUAL_ENV
2025-05-13 19:31:23.957465 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-13 19:31:23.957473 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-13 19:31:23.957481 | orchestrator | + unset -f deactivate
2025-05-13 19:31:23.957489 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-13 19:31:23.963121 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-13 19:31:23.963137 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-13 19:31:23.963145 | orchestrator | + local max_attempts=60
2025-05-13 19:31:23.963153 | orchestrator | + local name=ceph-ansible
2025-05-13 19:31:23.963160 | orchestrator | + local attempt_num=1
2025-05-13 19:31:23.964429 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-13 19:31:24.001468 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:31:24.001584 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-13 19:31:24.001606 | orchestrator | + local max_attempts=60
2025-05-13 19:31:24.001621 | orchestrator | + local name=kolla-ansible
2025-05-13 19:31:24.001637 | orchestrator | + local attempt_num=1
2025-05-13 19:31:24.001897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-13 19:31:24.035575 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:31:24.035679 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-13 19:31:24.035695 | orchestrator | + local max_attempts=60
2025-05-13 19:31:24.035833 | orchestrator | + local name=osism-ansible
2025-05-13 19:31:24.035846 | orchestrator | + local attempt_num=1
2025-05-13 19:31:24.035870 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-13 19:31:24.064448 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:31:24.064585 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-13 19:31:24.064649 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-13 19:31:24.771985 | orchestrator | ++ semver latest 9.0.0
2025-05-13 19:31:24.824498 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-13 19:31:24.824623 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-13 19:31:24.824647 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1
2025-05-13 19:31:24.824669 | orchestrator | + local max_attempts=60
2025-05-13 19:31:24.824688 | orchestrator | + local name=netbox-netbox-1
2025-05-13 19:31:24.824705 | orchestrator | + local attempt_num=1
2025-05-13 19:31:24.825071 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1
2025-05-13 19:31:24.860983 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
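From the xtrace above (the local max_attempts/name/attempt_num assignments and the docker inspect on .State.Health.Status), the wait_for_container_healthy helper can be reconstructed roughly as follows. The retry delay and the failure branch are assumptions, since only first-try successes appear in this log:

    # Reconstructed sketch of the helper traced above; sleep interval and
    # error handling are assumptions (only successful first attempts logged).
    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if ((attempt_num++ == max_attempts)); then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            sleep 5
        done
    }

    wait_for_container_healthy 60 netbox-netbox-1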
2025-05-13 19:31:24.861145 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh
2025-05-13 19:31:24.868680 | orchestrator | + set -e
2025-05-13 19:31:24.868712 | orchestrator | + osism manage netbox --parallel 4
2025-05-13 19:31:26.782717 | orchestrator | 2025-05-13 19:31:26 | INFO  | It takes a moment until task 6de58f26-3e09-47b7-a682-1eb6dc6b7c22 (netbox-manager) has been started and output is visible here.
2025-05-13 19:31:29.271669 | orchestrator | 2025-05-13 19:31:29 | INFO  | Wait for NetBox service
2025-05-13 19:31:31.301828 | orchestrator |
2025-05-13 19:31:31.301929 | orchestrator | PLAY [Wait for NetBox service] *************************************************
2025-05-13 19:31:31.407694 | orchestrator |
2025-05-13 19:31:31.410121 | orchestrator | TASK [Wait for NetBox service REST API] ****************************************
2025-05-13 19:31:32.567189 | orchestrator | ok: [localhost]
2025-05-13 19:31:32.571627 | orchestrator |
2025-05-13 19:31:32.572341 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:31:32.572840 | orchestrator | 2025-05-13 19:31:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:31:32.573212 | orchestrator | 2025-05-13 19:31:32 | INFO  | Please wait and do not abort execution.
2025-05-13 19:31:32.574411 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
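The "Wait for NetBox service REST API" task blocks until the API answers. A minimal equivalent poll in bash; the URL and retry timing are placeholders invented for this sketch:

    # Poll the NetBox REST API root until it responds; URL is a placeholder.
    until curl -sf -o /dev/null https://netbox.example.test/api/; do
        echo "NetBox API not ready yet, retrying ..."
        sleep 5
    done
    echo "NetBox API is reachable"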
2025-05-13 19:31:33.172429 | orchestrator | 2025-05-13 19:31:33 | INFO  | Manage devicetypes
2025-05-13 19:31:36.242664 | orchestrator | 2025-05-13 19:31:36 | INFO  | Manage moduletypes
2025-05-13 19:31:36.435783 | orchestrator | 2025-05-13 19:31:36 | INFO  | Manage resources
2025-05-13 19:31:36.448675 | orchestrator | 2025-05-13 19:31:36 | INFO  | Handle file /netbox/resources/100-initialise.yml
2025-05-13 19:31:37.529562 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification.
2025-05-13 19:31:37.529672 | orchestrator | Manufacturer queued for addition: Arista
2025-05-13 19:31:37.530704 | orchestrator | Manufacturer queued for addition: Other
2025-05-13 19:31:37.532138 | orchestrator | Manufacturer Created: Arista - 2
2025-05-13 19:31:37.532831 | orchestrator | Manufacturer Created: Other - 3
2025-05-13 19:31:37.534292 | orchestrator | Device Type Created: Arista - DCS-7050TX3-48C8 - 2
2025-05-13 19:31:37.535168 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 1
2025-05-13 19:31:37.535756 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 2
2025-05-13 19:31:37.536533 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 3
2025-05-13 19:31:37.537619 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 4
2025-05-13 19:31:37.538132 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 5
2025-05-13 19:31:37.539952 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 6
2025-05-13 19:31:37.540404 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 7
2025-05-13 19:31:37.541298 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 8
2025-05-13 19:31:37.542226 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 9
2025-05-13 19:31:37.542767 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 10
2025-05-13 19:31:37.543490 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 11
2025-05-13 19:31:37.543952 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 12
2025-05-13 19:31:37.545296 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 13
2025-05-13 19:31:37.545976 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 14
2025-05-13 19:31:37.547580 | orchestrator | Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 15
2025-05-13 19:31:37.548256 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 16
2025-05-13 19:31:37.549152 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 17
2025-05-13 19:31:37.550662 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 18
2025-05-13 19:31:37.552219 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 19
2025-05-13 19:31:37.552958 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 20
2025-05-13 19:31:37.554213 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 21
2025-05-13 19:31:37.554901 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 22
2025-05-13 19:31:37.555917 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 23
2025-05-13 19:31:37.556966 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 24
2025-05-13 19:31:37.557560 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 25
2025-05-13 19:31:37.558696 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 26
2025-05-13 19:31:37.559650 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 27
2025-05-13 19:31:37.560183 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 28
2025-05-13 19:31:37.561241 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 29
2025-05-13 19:31:37.561998 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 30
2025-05-13 19:31:37.562623 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 31
2025-05-13 19:31:37.563981 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 32
2025-05-13 19:31:37.564978 | orchestrator | Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 33
2025-05-13 19:31:37.565670 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 34
2025-05-13 19:31:37.566642 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 35
2025-05-13 19:31:37.567292 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 36
2025-05-13 19:31:37.567816 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 37
2025-05-13 19:31:37.568560 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 38
2025-05-13 19:31:37.569349 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 39
2025-05-13 19:31:37.569900 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 40
2025-05-13 19:31:37.570875 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 41
2025-05-13 19:31:37.571399 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 42
2025-05-13 19:31:37.572219 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 43
2025-05-13 19:31:37.572647 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 44
2025-05-13 19:31:37.573416 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 45
2025-05-13 19:31:37.574448 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 46
2025-05-13 19:31:37.574904 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 47
2025-05-13 19:31:37.575785 | orchestrator | Interface Template Created: Ethernet48 - 10GBASE-T (10GE) - 2 - 48
2025-05-13 19:31:37.577153 | orchestrator | Interface Template Created: Ethernet49/1 - QSFP28 (100GE) - 2 - 49
2025-05-13 19:31:37.577340 | orchestrator | Interface Template Created: Ethernet50/1 - QSFP28 (100GE) - 2 - 50
2025-05-13 19:31:37.577999 | orchestrator | Interface Template Created: Ethernet51/1 - QSFP28 (100GE) - 2 - 51
2025-05-13 19:31:37.579342 | orchestrator | Interface Template Created: Ethernet52/1 - QSFP28 (100GE) - 2 - 52
2025-05-13 19:31:37.581772 | orchestrator | Interface Template Created: Ethernet53/1 - QSFP28 (100GE) - 2 - 53
2025-05-13 19:31:37.582688 | orchestrator | Interface Template Created: Ethernet54/1 - QSFP28 (100GE) - 2 - 54
2025-05-13 19:31:37.582771 | orchestrator | Interface Template Created: Ethernet55/1 - QSFP28 (100GE) - 2 - 55
2025-05-13 19:31:37.583134 | orchestrator | Interface Template Created: Ethernet56/1 - QSFP28 (100GE) - 2 - 56
2025-05-13 19:31:37.583477 | orchestrator | Interface Template Created: Management1 - 1000BASE-T (1GE) - 2 - 57
2025-05-13 19:31:37.584192 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1
2025-05-13 19:31:37.584899 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2
2025-05-13 19:31:37.584926 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1
2025-05-13 19:31:37.585588 | orchestrator | Device Type Created: Other - Baremetal-Device - 3
2025-05-13 19:31:37.587107 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 3 - 58
2025-05-13 19:31:37.587624 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 3 - 59
2025-05-13 19:31:37.588518 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3
2025-05-13 19:31:37.588991 | orchestrator | Device Type Created: Other - Manager - 4
2025-05-13 19:31:37.589793 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 4 - 60
2025-05-13 19:31:37.590564 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 61
2025-05-13 19:31:37.590867 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 62
2025-05-13 19:31:37.591766 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 4 - 63
2025-05-13 19:31:37.592492 | orchestrator | Power Port Template Created: PS1 - C14 - 4 - 4
2025-05-13 19:31:37.592654 | orchestrator | Device Type Created: Other - Node - 5
2025-05-13 19:31:37.593200 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 64
2025-05-13 19:31:37.593904 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 65
2025-05-13 19:31:37.594129 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 66
2025-05-13 19:31:37.594769 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 5 - 67
2025-05-13 19:31:37.595500 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 5
2025-05-13 19:31:37.595924 | orchestrator | Device Type Created: Other - Baremetal-Housing - 6
2025-05-13 19:31:37.596524 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 68
2025-05-13 19:31:37.597458 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 69
2025-05-13 19:31:37.597951 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 70
2025-05-13 19:31:37.598684 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 6 - 71
2025-05-13 19:31:37.599274 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 6
2025-05-13 19:31:37.599979 | orchestrator | Manufacturer queued for addition: .gitkeep
2025-05-13 19:31:37.600975 | orchestrator | Manufacturer Created: .gitkeep - 4
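netbox-manager expands each device-type definition into the individual template objects listed above (48 copper ports, 8 QSFP28 uplinks, plus management, power, and console templates per switch). Creating such interface templates by hand against the NetBox API would look roughly like this; the device-type ID 2 matches the log, but the URL, token, and call shape are assumptions of this sketch:

    # Bulk-create interface templates for device type 2 (DCS-7050TX3-48C8),
    # mirroring the objects reported above. URL and token are placeholders.
    NETBOX=https://netbox.example.test
    TOKEN=0123456789abcdef
    for i in $(seq 1 48); do
        curl -sk -X POST "$NETBOX/api/dcim/interface-templates/" \
            -H "Authorization: Token $TOKEN" \
            -H "Content-Type: application/json" \
            -d "{\"device_type\": 2, \"name\": \"Ethernet$i\", \"type\": \"10gbase-t\"}"
    done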
2025-05-13 19:31:37.601801 | orchestrator |
2025-05-13 19:31:37.602271 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] *******************
2025-05-13 19:31:37.603344 | orchestrator |
2025-05-13 19:31:37.603656 | orchestrator | TASK [Manage NetBox resource Testbed of type tenant] ***************************
2025-05-13 19:31:38.879136 | orchestrator | changed: [localhost]
2025-05-13 19:31:38.886520 | orchestrator |
2025-05-13 19:31:38.888726 | orchestrator | TASK [Manage NetBox resource Discworld of type site] ***************************
2025-05-13 19:31:40.110985 | orchestrator | changed: [localhost]
2025-05-13 19:31:40.114327 | orchestrator |
2025-05-13 19:31:40.114901 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ********************
2025-05-13 19:31:41.403560 | orchestrator | changed: [localhost]
2025-05-13 19:31:41.404847 | orchestrator |
2025-05-13 19:31:41.404925 | orchestrator | TASK [Manage NetBox resource OOB Testbed of type vlan] *************************
2025-05-13 19:31:42.935969 | orchestrator | changed: [localhost]
2025-05-13 19:31:42.938951 | orchestrator |
2025-05-13 19:31:42.941488 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 19:31:44.627588 | orchestrator | changed: [localhost]
2025-05-13 19:31:44.627691 | orchestrator |
2025-05-13 19:31:44.628988 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 19:31:45.887574 | orchestrator | changed: [localhost]
2025-05-13 19:31:45.891949 | orchestrator |
2025-05-13 19:31:45.892430 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-05-13 19:31:53.134296 | orchestrator | changed: [localhost]
2025-05-13 19:31:53.139553 | orchestrator |
2025-05-13 19:31:53.139795 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:31:54.462805 | orchestrator | changed: [localhost]
2025-05-13 19:31:54.468225 | orchestrator |
2025-05-13 19:31:54.469430 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:31:55.628894 | orchestrator | changed: [localhost]
2025-05-13 19:31:55.629343 | orchestrator |
2025-05-13 19:31:55.629785 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:31:55.630493 | orchestrator | 2025-05-13 19:31:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:31:55.630521 | orchestrator | 2025-05-13 19:31:55 | INFO  | Please wait and do not abort execution.
2025-05-13 19:31:55.631513 | orchestrator | localhost : ok=9 changed=9 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
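Each of these tasks upserts one object through the NetBox REST API, presumably via an Ansible NetBox module. The tenant task, for instance, corresponds roughly to the following raw call; endpoint, token, and slug handling are assumptions of this sketch rather than the playbook's literal mechanics:

    # Create the "Testbed" tenant; URL and token are placeholders.
    NETBOX=https://netbox.example.test
    TOKEN=0123456789abcdef
    curl -sk -X POST "$NETBOX/api/tenancy/tenants/" \
        -H "Authorization: Token $TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"name": "Testbed", "slug": "testbed"}'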
2025-05-13 19:31:55.865426 | orchestrator | 2025-05-13 19:31:55 | INFO  | Handle file /netbox/resources/200-rack-1000.yml
2025-05-13 19:31:56.966472 | orchestrator |
2025-05-13 19:31:56.968692 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ********************
2025-05-13 19:31:57.020440 | orchestrator |
2025-05-13 19:31:57.020544 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ********************************
2025-05-13 19:31:58.516711 | orchestrator | changed: [localhost]
2025-05-13 19:31:58.521223 | orchestrator |
2025-05-13 19:31:58.523157 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-05-13 19:32:05.169192 | orchestrator | changed: [localhost]
2025-05-13 19:32:05.170508 | orchestrator |
2025-05-13 19:32:05.173075 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ******************
2025-05-13 19:32:11.950995 | orchestrator | changed: [localhost]
2025-05-13 19:32:11.951478 | orchestrator |
2025-05-13 19:32:11.952243 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-05-13 19:32:18.081542 | orchestrator | changed: [localhost]
2025-05-13 19:32:18.083689 | orchestrator |
2025-05-13 19:32:18.083997 | orchestrator | TASK [Manage NetBox resource testbed-switch-oob of type device] ****************
2025-05-13 19:32:24.127855 | orchestrator | changed: [localhost]
2025-05-13 19:32:24.130859 | orchestrator |
2025-05-13 19:32:24.133078 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-05-13 19:32:26.730333 | orchestrator | changed: [localhost]
2025-05-13 19:32:26.737260 | orchestrator |
2025-05-13 19:32:26.737304 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-05-13 19:32:29.266575 | orchestrator | changed: [localhost]
2025-05-13 19:32:29.270818 | orchestrator |
2025-05-13 19:32:29.270905 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-05-13 19:32:31.777581 | orchestrator | changed: [localhost]
2025-05-13 19:32:31.778636 | orchestrator |
2025-05-13 19:32:31.778929 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-05-13 19:32:34.612159 | orchestrator | changed: [localhost]
2025-05-13 19:32:34.614292 | orchestrator |
2025-05-13 19:32:34.614682 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-05-13 19:32:37.289894 | orchestrator | changed: [localhost]
2025-05-13 19:32:37.294230 | orchestrator |
2025-05-13 19:32:37.294280 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-05-13 19:32:40.072852 | orchestrator | changed: [localhost]
2025-05-13 19:32:40.077125 | orchestrator |
2025-05-13 19:32:40.077958 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-05-13 19:32:42.394892 | orchestrator | changed: [localhost]
2025-05-13 19:32:42.396136 | orchestrator |
2025-05-13 19:32:42.396686 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-05-13 19:32:44.646269 | orchestrator | changed: [localhost]
2025-05-13 19:32:44.649380 | orchestrator |
2025-05-13 19:32:44.649754 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-05-13 19:32:46.939534 | orchestrator | changed: [localhost]
2025-05-13 19:32:46.941367 | orchestrator |
2025-05-13 19:32:46.942547 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-05-13 19:32:49.342145 | orchestrator | changed: [localhost]
2025-05-13 19:32:49.351699 | orchestrator |
2025-05-13 19:32:49.351780 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-05-13 19:32:51.695903 | orchestrator | changed: [localhost]
2025-05-13 19:32:51.697813 | orchestrator |
2025-05-13 19:32:51.698668 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:32:51.699134 | orchestrator | 2025-05-13 19:32:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:32:51.699614 | orchestrator | 2025-05-13 19:32:51 | INFO  | Please wait and do not abort execution.
2025-05-13 19:32:51.700686 | orchestrator | localhost : ok=16 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:32:51.940308 | orchestrator | 2025-05-13 19:32:51 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml
2025-05-13 19:32:51.953983 | orchestrator | 2025-05-13 19:32:51 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml
2025-05-13 19:32:51.960858 | orchestrator | 2025-05-13 19:32:51 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml
2025-05-13 19:32:51.964845 | orchestrator | 2025-05-13 19:32:51 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml
2025-05-13 19:32:53.127777 | orchestrator |
2025-05-13 19:32:53.129791 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] *************
2025-05-13 19:32:53.151557 | orchestrator |
2025-05-13 19:32:53.151732 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] ***************
2025-05-13 19:32:53.173372 | orchestrator |
2025-05-13 19:32:53.173607 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] ***************
2025-05-13 19:32:53.174350 | orchestrator |
2025-05-13 19:32:53.176091 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:53.194573 | orchestrator |
2025-05-13 19:32:53.194722 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] ***************
2025-05-13 19:32:53.212664 | orchestrator |
2025-05-13 19:32:53.212742 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:53.232671 | orchestrator |
2025-05-13 19:32:53.232742 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:53.250432 | orchestrator |
2025-05-13 19:32:53.251933 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:55.574924 | orchestrator | changed: [localhost]
2025-05-13 19:32:55.579360 | orchestrator |
2025-05-13 19:32:55.579694 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:55.700781 | orchestrator | changed: [localhost]
2025-05-13 19:32:55.708394 | orchestrator |
2025-05-13 19:32:55.708566 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] *************
2025-05-13 19:32:55.880392 | orchestrator | changed: [localhost]
2025-05-13 19:32:55.886834 | orchestrator |
2025-05-13 19:32:55.886976 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:56.486352 | orchestrator | changed: [localhost]
2025-05-13 19:32:56.490971 | orchestrator |
2025-05-13 19:32:56.491535 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:58.419218 | orchestrator | changed: [localhost]
2025-05-13 19:32:58.422766 | orchestrator |
2025-05-13 19:32:58.424460 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:58.508137 | orchestrator | changed: [localhost]
2025-05-13 19:32:58.515393 | orchestrator |
2025-05-13 19:32:58.515460 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:32:58.837978 | orchestrator | changed: [localhost]
2025-05-13 19:32:58.839289 | orchestrator |
2025-05-13 19:32:58.839831 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:32:59.802313 | orchestrator | changed: [localhost]
2025-05-13 19:32:59.804464 | orchestrator |
2025-05-13 19:32:59.805108 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-05-13 19:33:00.164195 | orchestrator | changed: [localhost]
2025-05-13 19:33:00.167654 | orchestrator |
2025-05-13 19:33:00.167733 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:00.445595 | orchestrator | changed: [localhost]
2025-05-13 19:33:00.448307 | orchestrator |
2025-05-13 19:33:00.449162 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:01.274443 | orchestrator | changed: [localhost]
2025-05-13 19:33:01.276339 | orchestrator |
2025-05-13 19:33:01.276379 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:02.154369 | orchestrator | changed: [localhost]
2025-05-13 19:33:02.155420 | orchestrator |
2025-05-13 19:33:02.155638 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:02.334791 | orchestrator | changed: [localhost]
2025-05-13 19:33:02.335868 | orchestrator |
2025-05-13 19:33:02.337706 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:02.899339 | orchestrator | changed: [localhost]
2025-05-13 19:33:02.900567 | orchestrator |
2025-05-13 19:33:02.900619 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:02.900763 | orchestrator | 2025-05-13 19:33:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:02.900835 | orchestrator | 2025-05-13 19:33:02 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:02.902457 | orchestrator | localhost : ok=5 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
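Because the bootstrap ran `osism manage netbox --parallel 4` (see the xtrace earlier), several 300-* resource files are processed concurrently, which is why tasks and recaps from different plays interleave in the output above and below. The effect is comparable to fanning the files out with xargs; this is only an illustration of the concurrency, not how netbox-manager is actually implemented:

    # Illustrative only: process resource files four at a time, as the
    # interleaved play output suggests netbox-manager does internally.
    ls /netbox/resources/300-*.yml | xargs -n 1 -P 4 echo "handling"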
2025-05-13 19:33:03.105112 | orchestrator | 2025-05-13 19:33:03 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml
2025-05-13 19:33:04.049491 | orchestrator | changed: [localhost]
2025-05-13 19:33:04.055604 | orchestrator |
2025-05-13 19:33:04.057964 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:04.132295 | orchestrator | changed: [localhost]
2025-05-13 19:33:04.136003 | orchestrator |
2025-05-13 19:33:04.136048 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:04.174252 | orchestrator | changed: [localhost]
2025-05-13 19:33:04.176371 | orchestrator |
2025-05-13 19:33:04.176760 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:04.181827 | orchestrator |
2025-05-13 19:33:04.182308 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] ***************
2025-05-13 19:33:04.226736 | orchestrator |
2025-05-13 19:33:04.226874 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:05.555367 | orchestrator | changed: [localhost]
2025-05-13 19:33:05.555500 | orchestrator |
2025-05-13 19:33:05.555596 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:06.285538 | orchestrator | changed: [localhost]
2025-05-13 19:33:06.288504 | orchestrator |
2025-05-13 19:33:06.291841 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:06.381373 | orchestrator | changed: [localhost]
2025-05-13 19:33:06.391391 | orchestrator |
2025-05-13 19:33:06.391573 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:06.514440 | orchestrator | changed: [localhost]
2025-05-13 19:33:06.517254 | orchestrator |
2025-05-13 19:33:06.518182 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:07.540003 | orchestrator | changed: [localhost]
2025-05-13 19:33:07.540152 | orchestrator |
2025-05-13 19:33:07.540180 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:08.041034 | orchestrator | changed: [localhost]
2025-05-13 19:33:08.043515 | orchestrator |
2025-05-13 19:33:08.046448 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:08.543466 | orchestrator | changed: [localhost]
2025-05-13 19:33:08.547397 | orchestrator |
2025-05-13 19:33:08.547847 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:08.874306 | orchestrator | changed: [localhost]
2025-05-13 19:33:08.878625 | orchestrator |
2025-05-13 19:33:08.878671 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:09.053108 | orchestrator | changed: [localhost]
2025-05-13 19:33:09.059105 | orchestrator |
2025-05-13 19:33:09.059197 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-05-13 19:33:09.467871 | orchestrator | changed: [localhost]
2025-05-13 19:33:09.471159 | orchestrator |
2025-05-13 19:33:09.471587 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-05-13 19:33:10.640811 | orchestrator | changed: [localhost]
2025-05-13 19:33:10.641001 | orchestrator |
2025-05-13 19:33:10.642742 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:10.741769 | orchestrator | changed: [localhost]
2025-05-13 19:33:10.743890 | orchestrator |
2025-05-13 19:33:10.744568 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:11.015694 | orchestrator | changed: [localhost]
2025-05-13 19:33:11.023497 | orchestrator |
2025-05-13 19:33:11.023952 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 19:33:11.338326 | orchestrator | changed: [localhost]
2025-05-13 19:33:11.342636 | orchestrator |
2025-05-13 19:33:11.344115 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 19:33:12.419613 | orchestrator | changed: [localhost]
2025-05-13 19:33:12.419729 | orchestrator |
2025-05-13 19:33:12.420689 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:13.347098 | orchestrator | changed: [localhost]
2025-05-13 19:33:13.348984 | orchestrator |
2025-05-13 19:33:13.349166 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:13.499975 | orchestrator | changed: [localhost]
2025-05-13 19:33:13.511338 | orchestrator |
2025-05-13 19:33:13.511571 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:13.512150 | orchestrator | 2025-05-13 19:33:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:13.514276 | orchestrator | 2025-05-13 19:33:13 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:13.518249 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:33:13.787762 | orchestrator | 2025-05-13 19:33:13 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml
2025-05-13 19:33:14.222143 | orchestrator | changed: [localhost]
2025-05-13 19:33:14.227709 | orchestrator |
2025-05-13 19:33:14.228137 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:14.295971 | orchestrator | changed: [localhost]
2025-05-13 19:33:14.298617 | orchestrator |
2025-05-13 19:33:14.299452 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:14.299498 | orchestrator | 2025-05-13 19:33:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:14.299514 | orchestrator | 2025-05-13 19:33:14 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:14.302268 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:33:14.556004 | orchestrator | 2025-05-13 19:33:14 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml
2025-05-13 19:33:14.912488 | orchestrator |
2025-05-13 19:33:14.916758 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] *************
2025-05-13 19:33:14.959132 | orchestrator |
2025-05-13 19:33:14.960455 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:15.051283 | orchestrator | changed: [localhost]
2025-05-13 19:33:15.051387 | orchestrator |
2025-05-13 19:33:15.051402 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:15.622291 | orchestrator | changed: [localhost]
2025-05-13 19:33:15.625450 | orchestrator |
2025-05-13 19:33:15.625591 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-05-13 19:33:15.648668 | orchestrator |
2025-05-13 19:33:15.649748 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] ***************
2025-05-13 19:33:15.705298 | orchestrator |
2025-05-13 19:33:15.706678 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:16.630887 | orchestrator | changed: [localhost]
2025-05-13 19:33:16.633124 | orchestrator |
2025-05-13 19:33:16.635334 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:17.262141 | orchestrator | changed: [localhost]
2025-05-13 19:33:17.265214 | orchestrator |
2025-05-13 19:33:17.265408 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:17.945677 | orchestrator | changed: [localhost]
2025-05-13 19:33:17.946467 | orchestrator |
2025-05-13 19:33:17.946498 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 19:33:18.095566 | orchestrator | changed: [localhost]
2025-05-13 19:33:18.099191 | orchestrator |
2025-05-13 19:33:18.099449 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:18.385725 | orchestrator | changed: [localhost]
2025-05-13 19:33:18.387095 | orchestrator |
2025-05-13 19:33:18.387346 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:19.426283 | orchestrator | changed: [localhost]
2025-05-13 19:33:19.432782 | orchestrator |
2025-05-13 19:33:19.433569 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] *************
2025-05-13 19:33:20.098571 | orchestrator | changed: [localhost]
2025-05-13 19:33:20.101870 | orchestrator |
2025-05-13 19:33:20.101980 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-05-13 19:33:20.465795 | orchestrator | changed: [localhost]
2025-05-13 19:33:20.465938 | orchestrator |
2025-05-13 19:33:20.465970 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:20.466169 | orchestrator | 2025-05-13 19:33:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:20.466189 | orchestrator | 2025-05-13 19:33:20 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:20.473499 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:33:20.496457 | orchestrator | changed: [localhost]
2025-05-13 19:33:20.496973 | orchestrator |
2025-05-13 19:33:20.499733 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:20.739840 | orchestrator | 2025-05-13 19:33:20 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml
2025-05-13 19:33:21.814802 | orchestrator |
2025-05-13 19:33:21.816813 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] ***************
2025-05-13 19:33:21.870807 | orchestrator |
2025-05-13 19:33:21.870908 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:21.950346 | orchestrator | changed: [localhost]
2025-05-13 19:33:21.954712 | orchestrator |
2025-05-13 19:33:21.955425 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] ***************
2025-05-13 19:33:22.238134 | orchestrator | changed: [localhost]
2025-05-13 19:33:22.245172 | orchestrator |
2025-05-13 19:33:22.245349 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:23.168029 | orchestrator | changed: [localhost]
2025-05-13 19:33:23.168174 | orchestrator |
2025-05-13 19:33:23.168210 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:24.216051 | orchestrator | changed: [localhost]
2025-05-13 19:33:24.216997 | orchestrator |
2025-05-13 19:33:24.217169 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:24.478336 | orchestrator | changed: [localhost]
2025-05-13 19:33:24.478447 | orchestrator |
2025-05-13 19:33:24.478838 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:24.478886 | orchestrator | 2025-05-13 19:33:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:24.478983 | orchestrator | 2025-05-13 19:33:24 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:24.480803 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
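The many cable resources in these plays wire device interfaces together in NetBox's DCIM model. One such cable created directly against the API might look like the call below; the termination IDs and credentials are invented for illustration, and the a_terminations/b_terminations list form follows the schema of recent NetBox releases:

    # Connect two interfaces with a cable; IDs and credentials are invented.
    NETBOX=https://netbox.example.test
    TOKEN=0123456789abcdef
    curl -sk -X POST "$NETBOX/api/dcim/cables/" \
        -H "Authorization: Token $TOKEN" \
        -H "Content-Type: application/json" \
        -d '{
              "a_terminations": [{"object_type": "dcim.interface", "object_id": 101}],
              "b_terminations": [{"object_type": "dcim.interface", "object_id": 202}]
            }'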
2025-05-13 19:33:24.538850 | orchestrator | changed: [localhost]
2025-05-13 19:33:24.545150 | orchestrator |
2025-05-13 19:33:24.545223 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-05-13 19:33:24.722274 | orchestrator | 2025-05-13 19:33:24 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml
2025-05-13 19:33:25.259767 | orchestrator | changed: [localhost]
2025-05-13 19:33:25.262238 | orchestrator |
2025-05-13 19:33:25.262273 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:25.847019 | orchestrator |
2025-05-13 19:33:25.847189 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] ***************
2025-05-13 19:33:25.899832 | orchestrator |
2025-05-13 19:33:25.901262 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:26.222545 | orchestrator | changed: [localhost]
2025-05-13 19:33:26.226238 | orchestrator |
2025-05-13 19:33:26.226751 | orchestrator | TASK [Manage NetBox resource of type mac_address] ******************************
2025-05-13 19:33:26.270927 | orchestrator | changed: [localhost]
2025-05-13 19:33:26.274195 | orchestrator |
2025-05-13 19:33:26.274644 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:27.049360 | orchestrator | changed: [localhost]
2025-05-13 19:33:27.049534 | orchestrator |
2025-05-13 19:33:27.050360 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:27.976282 | orchestrator | changed: [localhost]
2025-05-13 19:33:27.976400 | orchestrator |
2025-05-13 19:33:27.978830 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:27.978988 | orchestrator | 2025-05-13 19:33:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:27.979110 | orchestrator | 2025-05-13 19:33:27 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:27.979446 | orchestrator | localhost : ok=6 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:33:28.236210 | orchestrator | 2025-05-13 19:33:28 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml 2025-05-13 19:33:28.331762 | orchestrator | changed: [localhost] 2025-05-13 19:33:28.337127 | orchestrator | 2025-05-13 19:33:28.339261 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:28.598490 | orchestrator | changed: [localhost] 2025-05-13 19:33:28.603860 | orchestrator | 2025-05-13 19:33:28.606939 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:28.926658 | orchestrator | changed: [localhost] 2025-05-13 19:33:28.927281 | orchestrator | 2025-05-13 19:33:28.931924 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:29.486316 | orchestrator | 2025-05-13 19:33:29.486562 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] ************** 2025-05-13 19:33:29.539724 | orchestrator | 2025-05-13 19:33:29.539951 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:30.381608 | orchestrator | changed: [localhost] 2025-05-13 19:33:30.383090 | orchestrator | 2025-05-13 19:33:30.384554 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:30.433029 | orchestrator | changed: [localhost] 2025-05-13 19:33:30.434621 | orchestrator | 2025-05-13 19:33:30.435000 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:31.572696 | orchestrator | changed: [localhost] 2025-05-13 19:33:31.578086 | orchestrator | 2025-05-13 19:33:31.579533 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:32.663954 | orchestrator | changed: [localhost] 2025-05-13 19:33:32.664810 | orchestrator | 2025-05-13 19:33:32.664853 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ******************** 2025-05-13 19:33:32.691265 | orchestrator | changed: [localhost] 2025-05-13 19:33:32.701146 | orchestrator | 2025-05-13 19:33:32.701317 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:32.825608 | orchestrator | changed: [localhost] 2025-05-13 19:33:32.838372 | orchestrator | 2025-05-13 19:33:32.839607 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:33.553554 | orchestrator | changed: [localhost] 2025-05-13 19:33:33.555816 | orchestrator | 2025-05-13 19:33:33.556272 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:34.716189 | orchestrator | changed: [localhost] 2025-05-13 19:33:34.717286 | orchestrator | 2025-05-13 19:33:34.720916 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:33:35.041444 | orchestrator | changed: [localhost] 2025-05-13 19:33:35.049695 | orchestrator | 2025-05-13 19:33:35.049830 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:35.101096 | orchestrator | changed: [localhost] 2025-05-13 19:33:35.106853 | orchestrator | 2025-05-13 19:33:35.107115 | orchestrator | 
TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:35.241916 | orchestrator | changed: [localhost] 2025-05-13 19:33:35.243647 | orchestrator | 2025-05-13 19:33:35.243744 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:36.776893 | orchestrator | changed: [localhost] 2025-05-13 19:33:36.780390 | orchestrator | 2025-05-13 19:33:36.780700 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:37.111516 | orchestrator | changed: [localhost] 2025-05-13 19:33:37.115491 | orchestrator | 2025-05-13 19:33:37.117527 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:37.196055 | orchestrator | changed: [localhost] 2025-05-13 19:33:37.201621 | orchestrator | 2025-05-13 19:33:37.203110 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:33:37.203221 | orchestrator | 2025-05-13 19:33:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:33:37.203234 | orchestrator | 2025-05-13 19:33:37 | INFO  | Please wait and do not abort execution. 2025-05-13 19:33:37.203826 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:33:37.452235 | orchestrator | 2025-05-13 19:33:37 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml 2025-05-13 19:33:37.576813 | orchestrator | changed: [localhost] 2025-05-13 19:33:37.581379 | orchestrator | 2025-05-13 19:33:37.581499 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:38.190788 | orchestrator | changed: [localhost] 2025-05-13 19:33:38.192871 | orchestrator | 2025-05-13 19:33:38.193400 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ******************** 2025-05-13 19:33:38.658524 | orchestrator | 2025-05-13 19:33:38.659829 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] *************** 2025-05-13 19:33:38.721444 | orchestrator | 2025-05-13 19:33:38.723129 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:39.093008 | orchestrator | changed: [localhost] 2025-05-13 19:33:39.099340 | orchestrator | 2025-05-13 19:33:39.101946 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ******************** 2025-05-13 19:33:39.201426 | orchestrator | changed: [localhost] 2025-05-13 19:33:39.207667 | orchestrator | 2025-05-13 19:33:39.207783 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:40.049843 | orchestrator | changed: [localhost] 2025-05-13 19:33:40.053596 | orchestrator | 2025-05-13 19:33:40.053686 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:33:40.901065 | orchestrator | changed: [localhost] 2025-05-13 19:33:40.902716 | orchestrator | 2025-05-13 19:33:40.903212 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:33:40.919821 | orchestrator | changed: [localhost] 2025-05-13 19:33:40.924633 | orchestrator | 2025-05-13 19:33:40.926446 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:41.565948 | 
orchestrator | changed: [localhost]
2025-05-13 19:33:41.567508 | orchestrator |
2025-05-13 19:33:41.567990 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-05-13 19:33:42.454422 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not resolve id of primary_mac_address: 52:8F:1C:A3:D7:E9"}
2025-05-13 19:33:42.455176 | orchestrator |
2025-05-13 19:33:42.455555 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:42.455816 | orchestrator | 2025-05-13 19:33:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:42.455850 | orchestrator | 2025-05-13 19:33:42 | INFO  | Please wait and do not abort execution.
2025-05-13 19:33:42.458703 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-13 19:33:42.699870 | orchestrator | 2025-05-13 19:33:42 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml
2025-05-13 19:33:43.049147 | orchestrator | changed: [localhost]
2025-05-13 19:33:43.052059 | orchestrator |
2025-05-13 19:33:43.052163 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-05-13 19:33:43.134953 | orchestrator | changed: [localhost]
2025-05-13 19:33:43.136045 | orchestrator |
2025-05-13 19:33:43.136145 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:33:43.136192 | orchestrator | 2025-05-13 19:33:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:33:43.136207 | orchestrator | 2025-05-13 19:33:43 | INFO  | Please wait and do not abort execution.
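The single failure in this run is the fatal above: the ip_address task aborts because NetBox cannot resolve the referenced primary_mac_address (52:8F:1C:A3:D7:E9) to an existing object, which suggests the corresponding mac_address resource had not been created yet when the ip_address referencing it was applied — the resource files are processed concurrently here, and all other play recaps report failed=0. One way to check by hand whether the MAC object exists would be a direct query against the NetBox 4.2 REST API; a minimal sketch, assuming the standard /api/dcim/mac-addresses/ endpoint, with NETBOX_URL and NETBOX_TOKEN as placeholders:

    # hypothetical manual check against the NetBox 4.2 API (endpoint path assumed)
    curl -s -H "Authorization: Token ${NETBOX_TOKEN}" \
      "${NETBOX_URL}/api/dcim/mac-addresses/?mac_address=52:8F:1C:A3:D7:E9" | jq '.count'
    # a count of 0 would confirm the object was still missing at the time of the failure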
2025-05-13 19:33:43.138627 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:33:43.214555 | orchestrator | changed: [localhost] 2025-05-13 19:33:43.215784 | orchestrator | 2025-05-13 19:33:43.216234 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:43.377983 | orchestrator | 2025-05-13 19:33:43 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml 2025-05-13 19:33:43.803005 | orchestrator | 2025-05-13 19:33:43.803712 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] *************** 2025-05-13 19:33:43.856793 | orchestrator | 2025-05-13 19:33:43.856984 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:44.536714 | orchestrator | 2025-05-13 19:33:44.538626 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] *************** 2025-05-13 19:33:44.607292 | orchestrator | 2025-05-13 19:33:44.608717 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:45.079951 | orchestrator | changed: [localhost] 2025-05-13 19:33:45.080760 | orchestrator | 2025-05-13 19:33:45.082636 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:45.324372 | orchestrator | changed: [localhost] 2025-05-13 19:33:45.324606 | orchestrator | 2025-05-13 19:33:45.324708 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:46.224885 | orchestrator | changed: [localhost] 2025-05-13 19:33:46.230625 | orchestrator | 2025-05-13 19:33:46.235589 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:46.819389 | orchestrator | changed: [localhost] 2025-05-13 19:33:46.823574 | orchestrator | 2025-05-13 19:33:46.823690 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] ******************* 2025-05-13 19:33:47.025587 | orchestrator | changed: [localhost] 2025-05-13 19:33:47.026614 | orchestrator | 2025-05-13 19:33:47.027345 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:47.708654 | orchestrator | changed: [localhost] 2025-05-13 19:33:47.716882 | orchestrator | 2025-05-13 19:33:47.716916 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:48.314485 | orchestrator | changed: [localhost] 2025-05-13 19:33:48.318865 | orchestrator | 2025-05-13 19:33:48.319490 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:48.589320 | orchestrator | changed: [localhost] 2025-05-13 19:33:48.603829 | orchestrator | 2025-05-13 19:33:48.604944 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:33:49.281364 | orchestrator | changed: [localhost] 2025-05-13 19:33:49.287459 | orchestrator | 2025-05-13 19:33:49.290357 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:49.642158 | orchestrator | changed: [localhost] 2025-05-13 19:33:49.644667 | orchestrator | 2025-05-13 19:33:49.644871 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:50.606575 | orchestrator | 
changed: [localhost] 2025-05-13 19:33:50.617190 | orchestrator | 2025-05-13 19:33:50.618166 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:51.429045 | orchestrator | changed: [localhost] 2025-05-13 19:33:51.431870 | orchestrator | 2025-05-13 19:33:51.431927 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:51.797944 | orchestrator | changed: [localhost] 2025-05-13 19:33:51.798316 | orchestrator | 2025-05-13 19:33:51.799743 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:33:51.801195 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:33:51.801264 | orchestrator | 2025-05-13 19:33:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:33:51.801279 | orchestrator | 2025-05-13 19:33:51 | INFO  | Please wait and do not abort execution. 2025-05-13 19:33:52.014824 | orchestrator | changed: [localhost] 2025-05-13 19:33:52.018550 | orchestrator | 2025-05-13 19:33:52.019001 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:52.035911 | orchestrator | 2025-05-13 19:33:52 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml 2025-05-13 19:33:52.777907 | orchestrator | changed: [localhost] 2025-05-13 19:33:52.783500 | orchestrator | 2025-05-13 19:33:52.783885 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:53.113276 | orchestrator | 2025-05-13 19:33:53.113446 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] ************* 2025-05-13 19:33:53.162464 | orchestrator | 2025-05-13 19:33:53.162615 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-05-13 19:33:53.221794 | orchestrator | changed: [localhost] 2025-05-13 19:33:53.231936 | orchestrator | 2025-05-13 19:33:53.232321 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:54.074966 | orchestrator | changed: [localhost] 2025-05-13 19:33:54.076058 | orchestrator | 2025-05-13 19:33:54.076240 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:54.702543 | orchestrator | changed: [localhost] 2025-05-13 19:33:54.709121 | orchestrator | 2025-05-13 19:33:54.709371 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ******************** 2025-05-13 19:33:55.543355 | orchestrator | changed: [localhost] 2025-05-13 19:33:55.547309 | orchestrator | 2025-05-13 19:33:55.548874 | orchestrator | TASK [Manage NetBox resource Management1 of type device_interface] ************* 2025-05-13 19:33:55.570591 | orchestrator | changed: [localhost] 2025-05-13 19:33:55.572453 | orchestrator | 2025-05-13 19:33:55.574451 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:55.833060 | orchestrator | changed: [localhost] 2025-05-13 19:33:55.841839 | orchestrator | 2025-05-13 19:33:55.842415 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:56.575203 | orchestrator | changed: [localhost] 2025-05-13 19:33:56.581997 | orchestrator | 2025-05-13 19:33:56.583254 | orchestrator | TASK 
[Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:33:57.149843 | orchestrator | changed: [localhost] 2025-05-13 19:33:57.152042 | orchestrator | 2025-05-13 19:33:57.154741 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:57.513946 | orchestrator | changed: [localhost] 2025-05-13 19:33:57.515039 | orchestrator | 2025-05-13 19:33:57.515702 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:57.699024 | orchestrator | changed: [localhost] 2025-05-13 19:33:57.700883 | orchestrator | 2025-05-13 19:33:57.701472 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-05-13 19:33:58.903355 | orchestrator | changed: [localhost] 2025-05-13 19:33:58.905897 | orchestrator | 2025-05-13 19:33:58.906852 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:33:58.967609 | orchestrator | changed: [localhost] 2025-05-13 19:33:58.973585 | orchestrator | 2025-05-13 19:33:58.973915 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:33:58.974189 | orchestrator | 2025-05-13 19:33:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:33:58.974265 | orchestrator | 2025-05-13 19:33:58 | INFO  | Please wait and do not abort execution. 2025-05-13 19:33:58.975342 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:33:59.291804 | orchestrator | changed: [localhost] 2025-05-13 19:33:59.298310 | orchestrator | 2025-05-13 19:33:59.298452 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:34:00.007462 | orchestrator | changed: [localhost] 2025-05-13 19:34:00.009116 | orchestrator | 2025-05-13 19:34:00.009636 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ****************** 2025-05-13 19:34:00.379177 | orchestrator | changed: [localhost] 2025-05-13 19:34:00.383163 | orchestrator | 2025-05-13 19:34:00.383443 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ******************** 2025-05-13 19:34:00.686193 | orchestrator | changed: [localhost] 2025-05-13 19:34:00.686322 | orchestrator | 2025-05-13 19:34:00.686405 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ******************** 2025-05-13 19:34:01.716141 | orchestrator | changed: [localhost] 2025-05-13 19:34:01.716332 | orchestrator | 2025-05-13 19:34:01.716434 | orchestrator | TASK [Manage NetBox resource of type mac_address] ****************************** 2025-05-13 19:34:02.315531 | orchestrator | changed: [localhost] 2025-05-13 19:34:02.316757 | orchestrator | 2025-05-13 19:34:02.317381 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:34:03.238302 | orchestrator | changed: [localhost] 2025-05-13 19:34:03.241869 | orchestrator | 2025-05-13 19:34:03.243047 | orchestrator | TASK [Manage NetBox resource Ethernet0 of type device_interface] *************** 2025-05-13 19:34:03.532979 | orchestrator | changed: [localhost] 2025-05-13 19:34:03.533122 | orchestrator | 2025-05-13 19:34:03.533953 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:34:03.533981 | 
orchestrator | localhost : ok=5 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:34:03.534067 | orchestrator | 2025-05-13 19:34:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:34:03.534116 | orchestrator | 2025-05-13 19:34:03 | INFO  | Please wait and do not abort execution.
2025-05-13 19:34:05.286331 | orchestrator | changed: [localhost]
2025-05-13 19:34:05.286444 | orchestrator |
2025-05-13 19:34:05.286463 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:34:05.286531 | orchestrator | 2025-05-13 19:34:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:34:05.286548 | orchestrator | 2025-05-13 19:34:05 | INFO  | Please wait and do not abort execution.
2025-05-13 19:34:05.287344 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:34:05.684202 | orchestrator | changed: [localhost]
2025-05-13 19:34:05.684557 | orchestrator |
2025-05-13 19:34:05.684667 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:34:05.685574 | orchestrator | 2025-05-13 19:34:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:34:05.685718 | orchestrator | 2025-05-13 19:34:05 | INFO  | Please wait and do not abort execution.
2025-05-13 19:34:05.685808 | orchestrator | localhost : ok=10 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:34:05.923114 | orchestrator | 2025-05-13 19:34:05 | INFO  | Runtime: 156.6633s
2025-05-13 19:34:06.331635 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-13 19:34:06.522601 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-13 19:34:06.522741 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522756 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522768 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 4 minutes ago Up 4 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-05-13 19:34:06.522801 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 4 minutes ago Up 4 minutes (healthy) 8000/tcp
2025-05-13 19:34:06.522813 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522824 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522835 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522846 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 4 minutes ago Up 3 minutes (healthy)
2025-05-13 19:34:06.522857 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522868 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 4 minutes ago Up 4 minutes (healthy) 3306/tcp
2025-05-13 19:34:06.522879 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522890 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522901 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 4 minutes ago Up 4 minutes (healthy) 6379/tcp
2025-05-13 19:34:06.522911 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522922 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522933 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.522944 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 4 minutes ago Up 4 minutes (healthy)
2025-05-13 19:34:06.529692 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-13 19:34:06.679409 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-13 19:34:06.679541 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 11 minutes ago Up 10 minutes (healthy)
2025-05-13 19:34:06.679556 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 11 minutes ago Up 6 minutes (healthy)
2025-05-13 19:34:06.679568 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 11 minutes ago Up 10 minutes (healthy) 5432/tcp
2025-05-13 19:34:06.679581 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 11 minutes ago Up 10 minutes (healthy) 6379/tcp
2025-05-13 19:34:06.688812 | orchestrator | ++ semver latest 7.0.0
2025-05-13 19:34:06.739563 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-13 19:34:06.739668 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-13 19:34:06.739684 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-13 19:34:06.743650 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-13 19:34:08.466256 | orchestrator | 2025-05-13 19:34:08 | INFO  | Task cdde6d73-3978-4b52-a63c-feebd3e59bfb (resolvconf) was prepared for execution.
2025-05-13 19:34:08.466381 | orchestrator | 2025-05-13 19:34:08 | INFO  | It takes a moment until task cdde6d73-3978-4b52-a63c-feebd3e59bfb (resolvconf) has been started and output is visible here.
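The xtrace lines a few entries above (semver … / sed …) are a version gate: `semver latest 7.0.0` returns -1, the `-ge 0` test fails, and the fallback comparison against the literal `latest` matches, so the sed swaps the callback plugin in ansible.cfg. Reconstructed from the '+' trace output, the gate presumably has roughly this shape (the variable name is an assumption, not taken from the source):

    # sketch of the gate reconstructed from the xtrace above; MANAGER_VERSION is an assumed name
    if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]] || [[ "$MANAGER_VERSION" == "latest" ]]; then
        # from 7.0.0 on (and on latest), use the osism.commons.still_alive callback
        sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
    fi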
2025-05-13 19:34:12.345714 | orchestrator |
2025-05-13 19:34:12.346514 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-13 19:34:12.347230 | orchestrator |
2025-05-13 19:34:12.348919 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-13 19:34:12.349554 | orchestrator | Tuesday 13 May 2025 19:34:12 +0000 (0:00:00.145) 0:00:00.145 ***********
2025-05-13 19:34:16.037976 | orchestrator | ok: [testbed-manager]
2025-05-13 19:34:16.038467 | orchestrator |
2025-05-13 19:34:16.039436 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-13 19:34:16.041756 | orchestrator | Tuesday 13 May 2025 19:34:16 +0000 (0:00:03.696) 0:00:03.841 ***********
2025-05-13 19:34:16.106359 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:34:16.106485 | orchestrator |
2025-05-13 19:34:16.106805 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-13 19:34:16.107753 | orchestrator | Tuesday 13 May 2025 19:34:16 +0000 (0:00:00.068) 0:00:03.910 ***********
2025-05-13 19:34:16.205631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-13 19:34:16.205858 | orchestrator |
2025-05-13 19:34:16.206987 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-13 19:34:16.207547 | orchestrator | Tuesday 13 May 2025 19:34:16 +0000 (0:00:00.100) 0:00:04.010 ***********
2025-05-13 19:34:16.293527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-13 19:34:16.294177 | orchestrator |
2025-05-13 19:34:16.295316 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-13 19:34:16.296223 | orchestrator | Tuesday 13 May 2025 19:34:16 +0000 (0:00:00.086) 0:00:04.097 ***********
2025-05-13 19:34:17.430191 | orchestrator | ok: [testbed-manager]
2025-05-13 19:34:17.430450 | orchestrator |
2025-05-13 19:34:17.430499 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-13 19:34:17.430963 | orchestrator | Tuesday 13 May 2025 19:34:17 +0000 (0:00:01.133) 0:00:05.231 ***********
2025-05-13 19:34:17.486985 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:34:17.487151 | orchestrator |
2025-05-13 19:34:17.487256 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-13 19:34:17.488443 | orchestrator | Tuesday 13 May 2025 19:34:17 +0000 (0:00:00.058) 0:00:05.290 ***********
2025-05-13 19:34:17.977597 | orchestrator | ok: [testbed-manager]
2025-05-13 19:34:17.977709 | orchestrator |
2025-05-13 19:34:17.977726 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-13 19:34:17.977739 | orchestrator | Tuesday 13 May 2025 19:34:17 +0000 (0:00:00.490) 0:00:05.780 ***********
2025-05-13 19:34:18.064426 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:34:18.064606 | orchestrator |
2025-05-13 19:34:18.064859 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-13 19:34:18.065990 | orchestrator | Tuesday 13 May 2025 19:34:18 +0000 (0:00:00.086) 0:00:05.867 ***********
2025-05-13 19:34:18.686243 | orchestrator | changed: [testbed-manager]
2025-05-13 19:34:18.686355 | orchestrator |
2025-05-13 19:34:18.686374 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-13 19:34:18.686804 | orchestrator | Tuesday 13 May 2025 19:34:18 +0000 (0:00:00.622) 0:00:06.489 ***********
2025-05-13 19:34:19.834713 | orchestrator | changed: [testbed-manager]
2025-05-13 19:34:19.834852 | orchestrator |
2025-05-13 19:34:19.836196 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-13 19:34:19.836411 | orchestrator | Tuesday 13 May 2025 19:34:19 +0000 (0:00:01.147) 0:00:07.636 ***********
2025-05-13 19:34:20.856545 | orchestrator | ok: [testbed-manager]
2025-05-13 19:34:20.856842 | orchestrator |
2025-05-13 19:34:20.857123 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-13 19:34:20.857736 | orchestrator | Tuesday 13 May 2025 19:34:20 +0000 (0:00:01.021) 0:00:08.658 ***********
2025-05-13 19:34:20.946951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-13 19:34:20.947079 | orchestrator |
2025-05-13 19:34:20.947611 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-13 19:34:20.948519 | orchestrator | Tuesday 13 May 2025 19:34:20 +0000 (0:00:00.091) 0:00:08.750 ***********
2025-05-13 19:34:22.098693 | orchestrator | changed: [testbed-manager]
2025-05-13 19:34:22.098827 | orchestrator |
2025-05-13 19:34:22.100156 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:34:22.100412 | orchestrator | 2025-05-13 19:34:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:34:22.100439 | orchestrator | 2025-05-13 19:34:22 | INFO  | Please wait and do not abort execution.
2025-05-13 19:34:22.101274 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-13 19:34:22.101881 | orchestrator |
2025-05-13 19:34:22.104506 | orchestrator |
2025-05-13 19:34:22.105691 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:34:22.106353 | orchestrator | Tuesday 13 May 2025 19:34:22 +0000 (0:00:01.151) 0:00:09.901 ***********
2025-05-13 19:34:22.107467 | orchestrator | ===============================================================================
2025-05-13 19:34:22.107816 | orchestrator | Gathering Facts --------------------------------------------------------- 3.70s
2025-05-13 19:34:22.108418 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s
2025-05-13 19:34:22.108946 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.15s
2025-05-13 19:34:22.109520 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s
2025-05-13 19:34:22.110076 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s
2025-05-13 19:34:22.110673 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s
2025-05-13 19:34:22.111270 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-05-13 19:34:22.112247 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-05-13 19:34:22.113137 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-05-13 19:34:22.113480 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-05-13 19:34:22.114314 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-05-13 19:34:22.114341 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-05-13 19:34:22.114848 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-13 19:34:22.565587 | orchestrator | + osism apply sshconfig
2025-05-13 19:34:24.279670 | orchestrator | 2025-05-13 19:34:24 | INFO  | Task d018704d-b1b9-4ba8-b6a0-d61795f2e4e8 (sshconfig) was prepared for execution.
2025-05-13 19:34:24.279779 | orchestrator | 2025-05-13 19:34:24 | INFO  | It takes a moment until task d018704d-b1b9-4ba8-b6a0-d61795f2e4e8 (sshconfig) has been started and output is visible here.
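The sshconfig play that follows writes one config fragment per inventory host into the operator user's ~/.ssh/config.d/ directory and then assembles them into a single ssh config (the "Ensure config for each host exist" and "Assemble ssh config" tasks). After the play, the result could be inspected on the manager roughly like this — paths are inferred from the task names, and the exact fragment layout is an assumption:

    # hypothetical inspection of the assembled config on the manager
    ssh testbed-manager 'ls ~/.ssh/config.d/ && grep -A2 "^Host testbed-node-0" ~/.ssh/config'
    # expected (assumed) shape: a Host block per node with its HostName/User settings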
2025-05-13 19:34:28.243079 | orchestrator |
2025-05-13 19:34:28.243241 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-13 19:34:28.246494 | orchestrator |
2025-05-13 19:34:28.247668 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-13 19:34:28.249127 | orchestrator | Tuesday 13 May 2025 19:34:28 +0000 (0:00:00.169) 0:00:00.169 ***********
2025-05-13 19:34:28.794068 | orchestrator | ok: [testbed-manager]
2025-05-13 19:34:28.794407 | orchestrator |
2025-05-13 19:34:28.795428 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-13 19:34:28.796236 | orchestrator | Tuesday 13 May 2025 19:34:28 +0000 (0:00:00.553) 0:00:00.723 ***********
2025-05-13 19:34:29.312520 | orchestrator | changed: [testbed-manager]
2025-05-13 19:34:29.313367 | orchestrator |
2025-05-13 19:34:29.314485 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-13 19:34:29.315264 | orchestrator | Tuesday 13 May 2025 19:34:29 +0000 (0:00:00.517) 0:00:01.240 ***********
2025-05-13 19:34:34.850843 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-13 19:34:34.852206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-13 19:34:34.854198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-13 19:34:34.855623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-13 19:34:34.856227 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-13 19:34:34.858404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-13 19:34:34.859036 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-13 19:34:34.859801 | orchestrator |
2025-05-13 19:34:34.860884 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-13 19:34:34.861437 | orchestrator | Tuesday 13 May 2025 19:34:34 +0000 (0:00:05.537) 0:00:06.778 ***********
2025-05-13 19:34:34.916536 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:34:34.917846 | orchestrator |
2025-05-13 19:34:34.919216 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-13 19:34:34.920493 | orchestrator | Tuesday 13 May 2025 19:34:34 +0000 (0:00:00.068) 0:00:06.846 ***********
2025-05-13 19:34:35.508879 | orchestrator | changed: [testbed-manager]
2025-05-13 19:34:35.508987 | orchestrator |
2025-05-13 19:34:35.510513 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:34:35.510563 | orchestrator | 2025-05-13 19:34:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:34:35.510578 | orchestrator | 2025-05-13 19:34:35 | INFO  | Please wait and do not abort execution.
2025-05-13 19:34:35.511544 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:34:35.512379 | orchestrator |
2025-05-13 19:34:35.512991 | orchestrator |
2025-05-13 19:34:35.513590 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:34:35.514559 | orchestrator | Tuesday 13 May 2025 19:34:35 +0000 (0:00:00.590) 0:00:07.437 ***********
2025-05-13 19:34:35.514938 | orchestrator | ===============================================================================
2025-05-13 19:34:35.515982 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.54s
2025-05-13 19:34:35.516298 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s
2025-05-13 19:34:35.518305 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-05-13 19:34:35.518887 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2025-05-13 19:34:35.519249 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-05-13 19:34:35.989425 | orchestrator | + osism apply known-hosts
2025-05-13 19:34:37.666281 | orchestrator | 2025-05-13 19:34:37 | INFO  | Task 1d859288-d821-44b1-b8a6-f4c93bc47a96 (known-hosts) was prepared for execution.
2025-05-13 19:34:37.666541 | orchestrator | 2025-05-13 19:34:37 | INFO  | It takes a moment until task 1d859288-d821-44b1-b8a6-f4c93bc47a96 (known-hosts) has been started and output is visible here.
2025-05-13 19:34:41.495647 | orchestrator |
2025-05-13 19:34:41.495779 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-05-13 19:34:41.496168 | orchestrator |
2025-05-13 19:34:41.498323 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-05-13 19:34:41.499349 | orchestrator | Tuesday 13 May 2025 19:34:41 +0000 (0:00:00.124) 0:00:00.124 ***********
2025-05-13 19:34:47.250299 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-05-13 19:34:47.251417 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-05-13 19:34:47.252776 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-05-13 19:34:47.253331 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-05-13 19:34:47.253859 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-05-13 19:34:47.255139 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-05-13 19:34:47.256515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-05-13 19:34:47.257732 | orchestrator |
2025-05-13 19:34:47.258092 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-05-13 19:34:47.259427 | orchestrator | Tuesday 13 May 2025 19:34:47 +0000 (0:00:05.755) 0:00:05.880 ***********
2025-05-13 19:34:47.412755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-05-13 19:34:47.413233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-05-13 19:34:47.414168 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-13 19:34:47.415998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-13 19:34:47.416092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-13 19:34:47.416719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-13 19:34:47.417899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-13 19:34:47.418269 | orchestrator | 2025-05-13 19:34:47.419000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:47.419501 | orchestrator | Tuesday 13 May 2025 19:34:47 +0000 (0:00:00.164) 0:00:06.044 *********** 2025-05-13 19:34:48.592021 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQdRqClVGU+VuxzHEqh5TzzzVIYOrsHY6aLxD6xx+oNHMlYIxlzt7oSrFMF7j3dnphd5V+1LpBUBxfev4/y29k=) 2025-05-13 19:34:48.592989 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClPIv6ETKKAXLmeb74ZjGpHsTfriFC3WIffndqitHlGY7Lkso47wIo5PeOgonfnjfTyRf+4J7DvZyGOFBkjpPiRaC1/fL42kLdZB+fTHNKGcIfD8rou+7d7pQ4zQryzhVMHW+U4TTq6IecV7lhuAYseskZ3rDGothtUtPDMNj9xnNYFsY0gaPUBexENmLg1FGSiRyBfGPrWUqq9vde0Ieq/tZb7FkmZYE1ftxWyYZfOPs/+YqtpJvJsPff2dImP1O9wb7k65SSPnsyipQ61NvTzV6igQn7Jcy/CwEdsigk2QH9v672jxOugHy2r/n/iF41hLQJCvhz+dspASuVaCYQ6ETcxOkNDIqINM6zQrozd5rsr9vUnr89QZFTS6befXOtqu2rxFKXiee90W2fVcfgtPr6ar1NhlWeN7S8D0FNQ+QQJYxtl0vuv9gbbSUxtmv00nyP0ygxy5VdL39YjpwM44VJmkUwlXLFJ3pYhrE0eQaI27pfHN1rpGNkBKgu4fk=) 2025-05-13 19:34:48.593093 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIApz1kfd+uSZjT3yCKRpSdnUw9QZYuAjzwwi6DKTg9YF) 2025-05-13 19:34:48.594335 | orchestrator | 2025-05-13 19:34:48.594452 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:48.595036 | orchestrator | Tuesday 13 May 2025 19:34:48 +0000 (0:00:01.178) 0:00:07.222 *********** 2025-05-13 19:34:49.652410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5D62ZLGObdKHnE4H9Nwe+T7nKf1EU18nJPtqMCHDpb115fymFih2z7QRLDiMIbPDzw2yRljDvtFZlct8Qs+cU=) 2025-05-13 19:34:49.653953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDL9GP9YDN7X+Pr60CGwCSG86/dEKNA7ldCaALzyX6rGVamFNuFpAdReChPOv1oxzXPvkT/U4lXWA5OFCRtXWC86waNSitOvgqJhoqA/3Jb8na7kK5WagY1DjmVWr7NbzBEY7GTTcsBs9qYVZmzql7avSjqQyKoIKnldCGt+UMc6IuTV84o6NkrylJ18OmA/fmVWDMsImX1yfgUD9zXqyfH1lwrS54/C9YFyRbO86RzV3Kue9877LbVD7l/qxeQmJkGd7g87TDwP8dcRQ6FesKLanxxIiHrzYk1ac0imENm/Tij50hTneC2ZAetnt261eXch1FWgQHrzX+kI+8T+2KMj74yWutopkObnz+Qg09CUjZxKKVmf4AqXjQB1rd5oDbhNZz5TxXdigHP1zufYYazyrZ9XgK/nzs7bnYZDMCHs2/VWsZw4Y7/Vpu+jLki0WtRDw2OqSkvpScxy1HndGkfXgwtGW1dxd9Y2SGb/M2Hxy3NP39oxZYH1j7nFXo7FSM=) 2025-05-13 19:34:49.654597 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJIop43fNC9z2hiXXdT+AwdoLV7VQwIUEDeEJEhZhbNZ) 2025-05-13 19:34:49.655093 | orchestrator | 2025-05-13 19:34:49.655513 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:49.655862 | orchestrator | Tuesday 13 May 2025 19:34:49 +0000 (0:00:01.059) 0:00:08.282 *********** 2025-05-13 19:34:50.721736 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6TueT7oC2Orf40+9jUoSd9NhyHkRPD4N9SKo/SkJlSXCrM8fpSYfyUGf7d+RrbptSgM9DBZ4EJ+j8I0KkJxuHPF8eEMKAyvBSRryj1ImHLuyuG6W5lNPiYxQ3f3pXF2jn21Wl0yBjj2fn1rxddx8VPoRDZNjymb9cS2qAzV29oy08a79BQy2g4vUq1wo5kaUCaVsd5MxLaxyITPjOsqN5KA29VXKKArnANHQ/KV9a5s79T+dS09V4jqauDlDshqp7lfKTOVZtDMjiyNtwa8CfFo+qGZ2keBDVtPfVRuU4H1hlms+DE3POKs075pYSJ3qEcBM306rSQlxe+/lWJNaEkMI9PXSMu5H27YsUOTkOkcaJV8gIFG+/sfjseJaHSQFyVrL0S+wrJKOPG3Xm0DqskdLoNjh2tk057Hi4MfCiiJ4WToQPUs01zTf4B4js+/T5k6JpMk3UYtlz1ZGf64JsXruG2GKo2OQyO3vQhAuQXjyb0qu9fEyBZmQBBIpEb/U=) 2025-05-13 19:34:50.722081 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRNDjuDE+8QZM/mrMmwzdjHQ+5uEl04J+pPpA2c34o1LEWRjy3EMZcfP3Pg1xmBX1feStRzURtZlWdb1QnNLG4=) 2025-05-13 19:34:50.722272 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpTVM92TobOvuqXFmhGH+PTVijAtT3kCjo31XhxfA77) 2025-05-13 19:34:50.723485 | orchestrator | 2025-05-13 19:34:50.724686 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:50.725704 | orchestrator | Tuesday 13 May 2025 19:34:50 +0000 (0:00:01.067) 0:00:09.350 *********** 2025-05-13 19:34:51.773701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTcLwXdFe8xWpoWtU1RrY+iPEdQ2Fim8YZeEPizQ+PwZVBn6d8ezTC6eu5+lQeq0nKdZBhO4Rdi/Wb+JjzqOXtTd1W/Q2ceQjbPM1f17sjqnb2VTc0nik4LUv9aSl7/PM9QR+8la2EUYPyhGfbXwH8eLt/27+LrzHr6CRzXgRz6MiYTch5q4m1FUtXVTT1ZkdU1JZmADUaXO4cAfES1Cbyho15QdMyE0awk4Cyr8U3vESwch7XcTGXZ+zmfsvX7YGxVq7pCgEmI07WxxF0glO5l4Upd48cr0AJXyI9hFS456PEj/3XS7+ticWNUx/SnKYW37PlekYtZ8iV/gxmeaoDjRKTFVapJcIj71DXIViNesaKu5J7wy6BDsAE9wmoU7axLiXAmrJBVPpg9i0i0ul/nvIjIcmGOs6ja6ES3CdrstGe267lLj1w82/p31/Ytlxjx8xt+rL2xVFS0DQq8wmRNPGVnv63a8yh5bYBy4WekqmL8z5tJSpKqJwMM3U1A3M=) 2025-05-13 19:34:51.775460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1XznpCNEbM6qJMLYJtkHy5n8Wfp9Tg1y7hllJ4fl5b) 2025-05-13 19:34:51.775505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA25JKXTNLLFo2C69detNOT2s+G7Ky1V+pLCPpVXOFWs43ikFk6R5PQKEVfGM2CBejoEsZfgepAwDs129Amuoq8=) 2025-05-13 19:34:51.775745 | orchestrator | 2025-05-13 19:34:51.776033 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:51.776562 | orchestrator | Tuesday 13 May 2025 19:34:51 +0000 (0:00:01.051) 0:00:10.402 *********** 2025-05-13 19:34:52.826385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4UF9tvBCFb8cJ2Cx6jyqgMfnmsLT9DavYiVXrJv6/s5x/haB3G2246Hi7i+BXqoNXOG+HTKGhDtqehVWy2FV2UL9hqLgidixPWTw4Y5yzrsFw8wt6LQkb+Z0jcryQcvNHuA1BRm2YBuBbB6nprm1it5Fa3CtBhT0Ir0D5mClx6MXeaprlEu/bmtOTe5HQRvFprDFCmilYol8p1qDJFngdhYaSEZfl6hSjoHXX4BTLh3Spb/783L/Uo80n9ZDzAasSSZiWNakoxarnAbDF5urJ7cvImy55Yn4lU81TugSzi+FGxhh+5te4oHyRGYzQiQXQEeHEFr/Eq4G68Ril4yhA6aPsab7hNAHSoYe1gqHAp+6T2CCu8xFXLxBYBsvXQE1h1Ri9zoxqiuk65tzOAMqSwzhwrqe6AUoNNavsQEyGSA27AniFtXrIh5+S1UH7D6sOa1gpWw7nuOmoy+HSFnQ6Oep2PcnqZfNn/O1hfi4Jd7V/Rz4d8c2S5/ArvDu/O6U=) 2025-05-13 19:34:52.827308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOcexYWh7LAFMQntPGk6ytEqo6gX5v/mM4Gg/niskLUOii0GBwpKsaQEKK7s/qjIKsYZRRFcn4w48s5LwJo1CuY=) 2025-05-13 19:34:52.828035 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIx4h0JfY4nFJN1ZkM9SP/TT+6+WgjdVaR+T0SebeRVf) 2025-05-13 19:34:52.828975 | orchestrator | 2025-05-13 19:34:52.829777 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:52.830256 | orchestrator | Tuesday 13 May 2025 19:34:52 +0000 (0:00:01.053) 0:00:11.455 *********** 2025-05-13 19:34:53.922910 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcLCnwZfULP8ylWf8sPevE505BtqWmmH/Ue0UUepl2WXp4q7LM93RpmSSuP8rgk1SrjU1mrRbmxaiKf8xYlcgMkaBgSJm+HTbTZig8at+W3dw1InU+nKB08ikMbHyheTTFyFmczR4pJpOsQJyCrh6AKYaLcpjvNWzy1ctCGNegJ+RP5dmNekg+pJ5FfMGwBQcrYOtwzxp/wFK7wU6+V3AboWOEnTfnuQCNltGjnbMyN0PVgwNvZh79VzlpPKFQopwIYc1vt8IlVaJRfsl1INioblhQLZdOdQJMJOeMUU7287JBItEiPgLdBFcrlQXfjAP6plSSAc7oeQ8xiygCTp4HCR1xY+nfbOtdvZD0XzZ9y5AYkZoERTk9YJMwok4Jm11ec4JQ7iwxg6tvuqfzH3dIXJhrKKKWMon98EK+wb4L6QRJOEG2oTMCYgEfgfaYfsZTOa43YGxgTJAr/WfXFThcabsEKfirUQkgDQLOonvHzmYYrz5dE/9HSQFahlsHmkU=) 2025-05-13 19:34:53.923578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECN0cBRIZjloJ2QQamW4IimZF+MlVIYgsaW13tHOglhcI5wpTCp9luK6dQj5aKIAVWveDQtdGOl8ynzHhwWbcc=) 2025-05-13 19:34:53.924180 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcSrN2KGosAlcsqZjjkCGivxn5hLJrTCHgu0OlpBWBz) 2025-05-13 19:34:53.924695 | orchestrator | 2025-05-13 19:34:53.925275 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:34:53.926328 | orchestrator | Tuesday 13 May 2025 19:34:53 +0000 (0:00:01.096) 0:00:12.551 *********** 2025-05-13 19:34:54.974442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+2HWRrlapc/zvhRKeIV4IZbeQXor+7VFZkHe30jqdU) 2025-05-13 19:34:54.974717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLtvFCmHpPQQwBb7Lf9pswoKaH/Vyg7N8cBAj79TlFLzNy04k1DEGIvrCdxDo2JQLZPV32dXqzbRK3aQDuodyESR2/0o2jxHKFHa3v1vSKoGst6zaQ2T5MbEATkinsJ3iOqXG+Qita8c/W3o8vWvITXezOzudGnTtSnXqLzlyAgS2OW409Bwfx+IDmwLEv4krqy8a+4r+GUW08oy5bOCFnoKGnk571FaW4o0BfqUoZiC6Hxu5E+6821Ykl5uDK8rKe5yqVja+X/vIyq/iGP2etWayoph6/q2lcd5HEkgh8uyEBKmdIQutpLuaUUYd/R4dlxlaeSSaMGaOTb8AapMXL9Fs/V3PNbNSF5twaaQRWBiM1TeZiK0Cb/1I7thY6xoyG7meiddWni1jjtk0cFUL7vRiSRAJJFUl6EaGG9B7HUem15TXsqneC+oq/NotdeoMVcHxjP7elr1fPm/CLf3Tad+LldNUvhiHUEW4d8OR78DTVzXwcSfZDFQYdGmKJ5/k=) 2025-05-13 19:34:54.975555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAPkPPlFq8oE7+eJ45eb1Y3HlO1yFd2gK8G1aZ57qxmT/U8T0aO68XriYmcAKziCntOUKtUlNRIkSZzfvhZOMSo=) 2025-05-13 19:34:54.976644 | orchestrator | 2025-05-13 19:34:54.977536 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-13 19:34:54.978216 | orchestrator | Tuesday 13 May 2025 19:34:54 +0000 (0:00:01.051) 0:00:13.603 *********** 2025-05-13 19:35:00.310469 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-13 19:35:00.316865 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-13 19:35:00.316947 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-13 19:35:00.318596 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-13 19:35:00.318640 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-13 19:35:00.318900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-13 19:35:00.319294 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-13 19:35:00.321180 | orchestrator | 2025-05-13 19:35:00.321268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-13 19:35:00.321622 | orchestrator | Tuesday 13 May 2025 19:35:00 +0000 (0:00:05.335) 0:00:18.939 *********** 2025-05-13 19:35:00.485363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-13 19:35:00.485754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-13 19:35:00.486907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-13 19:35:00.488498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-13 19:35:00.488562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-13 19:35:00.489021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-13 19:35:00.489942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml 
for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-13 19:35:00.490968 | orchestrator | 2025-05-13 19:35:00.491683 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:00.493069 | orchestrator | Tuesday 13 May 2025 19:35:00 +0000 (0:00:00.177) 0:00:19.116 *********** 2025-05-13 19:35:01.591011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClPIv6ETKKAXLmeb74ZjGpHsTfriFC3WIffndqitHlGY7Lkso47wIo5PeOgonfnjfTyRf+4J7DvZyGOFBkjpPiRaC1/fL42kLdZB+fTHNKGcIfD8rou+7d7pQ4zQryzhVMHW+U4TTq6IecV7lhuAYseskZ3rDGothtUtPDMNj9xnNYFsY0gaPUBexENmLg1FGSiRyBfGPrWUqq9vde0Ieq/tZb7FkmZYE1ftxWyYZfOPs/+YqtpJvJsPff2dImP1O9wb7k65SSPnsyipQ61NvTzV6igQn7Jcy/CwEdsigk2QH9v672jxOugHy2r/n/iF41hLQJCvhz+dspASuVaCYQ6ETcxOkNDIqINM6zQrozd5rsr9vUnr89QZFTS6befXOtqu2rxFKXiee90W2fVcfgtPr6ar1NhlWeN7S8D0FNQ+QQJYxtl0vuv9gbbSUxtmv00nyP0ygxy5VdL39YjpwM44VJmkUwlXLFJ3pYhrE0eQaI27pfHN1rpGNkBKgu4fk=) 2025-05-13 19:35:01.591217 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQdRqClVGU+VuxzHEqh5TzzzVIYOrsHY6aLxD6xx+oNHMlYIxlzt7oSrFMF7j3dnphd5V+1LpBUBxfev4/y29k=) 2025-05-13 19:35:01.591237 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIApz1kfd+uSZjT3yCKRpSdnUw9QZYuAjzwwi6DKTg9YF) 2025-05-13 19:35:01.592055 | orchestrator | 2025-05-13 19:35:01.593759 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:01.595087 | orchestrator | Tuesday 13 May 2025 19:35:01 +0000 (0:00:01.105) 0:00:20.222 *********** 2025-05-13 19:35:02.729002 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDL9GP9YDN7X+Pr60CGwCSG86/dEKNA7ldCaALzyX6rGVamFNuFpAdReChPOv1oxzXPvkT/U4lXWA5OFCRtXWC86waNSitOvgqJhoqA/3Jb8na7kK5WagY1DjmVWr7NbzBEY7GTTcsBs9qYVZmzql7avSjqQyKoIKnldCGt+UMc6IuTV84o6NkrylJ18OmA/fmVWDMsImX1yfgUD9zXqyfH1lwrS54/C9YFyRbO86RzV3Kue9877LbVD7l/qxeQmJkGd7g87TDwP8dcRQ6FesKLanxxIiHrzYk1ac0imENm/Tij50hTneC2ZAetnt261eXch1FWgQHrzX+kI+8T+2KMj74yWutopkObnz+Qg09CUjZxKKVmf4AqXjQB1rd5oDbhNZz5TxXdigHP1zufYYazyrZ9XgK/nzs7bnYZDMCHs2/VWsZw4Y7/Vpu+jLki0WtRDw2OqSkvpScxy1HndGkfXgwtGW1dxd9Y2SGb/M2Hxy3NP39oxZYH1j7nFXo7FSM=) 2025-05-13 19:35:02.730172 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5D62ZLGObdKHnE4H9Nwe+T7nKf1EU18nJPtqMCHDpb115fymFih2z7QRLDiMIbPDzw2yRljDvtFZlct8Qs+cU=) 2025-05-13 19:35:02.730921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJIop43fNC9z2hiXXdT+AwdoLV7VQwIUEDeEJEhZhbNZ) 2025-05-13 19:35:02.731369 | orchestrator | 2025-05-13 19:35:02.731838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:02.732504 | orchestrator | Tuesday 13 May 2025 19:35:02 +0000 (0:00:01.136) 0:00:21.358 *********** 2025-05-13 19:35:03.813314 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFpTVM92TobOvuqXFmhGH+PTVijAtT3kCjo31XhxfA77) 2025-05-13 19:35:03.813692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6TueT7oC2Orf40+9jUoSd9NhyHkRPD4N9SKo/SkJlSXCrM8fpSYfyUGf7d+RrbptSgM9DBZ4EJ+j8I0KkJxuHPF8eEMKAyvBSRryj1ImHLuyuG6W5lNPiYxQ3f3pXF2jn21Wl0yBjj2fn1rxddx8VPoRDZNjymb9cS2qAzV29oy08a79BQy2g4vUq1wo5kaUCaVsd5MxLaxyITPjOsqN5KA29VXKKArnANHQ/KV9a5s79T+dS09V4jqauDlDshqp7lfKTOVZtDMjiyNtwa8CfFo+qGZ2keBDVtPfVRuU4H1hlms+DE3POKs075pYSJ3qEcBM306rSQlxe+/lWJNaEkMI9PXSMu5H27YsUOTkOkcaJV8gIFG+/sfjseJaHSQFyVrL0S+wrJKOPG3Xm0DqskdLoNjh2tk057Hi4MfCiiJ4WToQPUs01zTf4B4js+/T5k6JpMk3UYtlz1ZGf64JsXruG2GKo2OQyO3vQhAuQXjyb0qu9fEyBZmQBBIpEb/U=) 2025-05-13 19:35:03.813728 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRNDjuDE+8QZM/mrMmwzdjHQ+5uEl04J+pPpA2c34o1LEWRjy3EMZcfP3Pg1xmBX1feStRzURtZlWdb1QnNLG4=) 2025-05-13 19:35:03.814866 | orchestrator | 2025-05-13 19:35:03.815354 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:03.816000 | orchestrator | Tuesday 13 May 2025 19:35:03 +0000 (0:00:01.084) 0:00:22.443 *********** 2025-05-13 19:35:04.884579 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTcLwXdFe8xWpoWtU1RrY+iPEdQ2Fim8YZeEPizQ+PwZVBn6d8ezTC6eu5+lQeq0nKdZBhO4Rdi/Wb+JjzqOXtTd1W/Q2ceQjbPM1f17sjqnb2VTc0nik4LUv9aSl7/PM9QR+8la2EUYPyhGfbXwH8eLt/27+LrzHr6CRzXgRz6MiYTch5q4m1FUtXVTT1ZkdU1JZmADUaXO4cAfES1Cbyho15QdMyE0awk4Cyr8U3vESwch7XcTGXZ+zmfsvX7YGxVq7pCgEmI07WxxF0glO5l4Upd48cr0AJXyI9hFS456PEj/3XS7+ticWNUx/SnKYW37PlekYtZ8iV/gxmeaoDjRKTFVapJcIj71DXIViNesaKu5J7wy6BDsAE9wmoU7axLiXAmrJBVPpg9i0i0ul/nvIjIcmGOs6ja6ES3CdrstGe267lLj1w82/p31/Ytlxjx8xt+rL2xVFS0DQq8wmRNPGVnv63a8yh5bYBy4WekqmL8z5tJSpKqJwMM3U1A3M=) 2025-05-13 19:35:04.884818 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA25JKXTNLLFo2C69detNOT2s+G7Ky1V+pLCPpVXOFWs43ikFk6R5PQKEVfGM2CBejoEsZfgepAwDs129Amuoq8=) 2025-05-13 19:35:04.885644 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1XznpCNEbM6qJMLYJtkHy5n8Wfp9Tg1y7hllJ4fl5b) 2025-05-13 19:35:04.886650 | orchestrator | 2025-05-13 19:35:04.887444 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:04.888075 | orchestrator | Tuesday 13 May 2025 19:35:04 +0000 (0:00:01.069) 0:00:23.513 *********** 2025-05-13 19:35:05.931670 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4UF9tvBCFb8cJ2Cx6jyqgMfnmsLT9DavYiVXrJv6/s5x/haB3G2246Hi7i+BXqoNXOG+HTKGhDtqehVWy2FV2UL9hqLgidixPWTw4Y5yzrsFw8wt6LQkb+Z0jcryQcvNHuA1BRm2YBuBbB6nprm1it5Fa3CtBhT0Ir0D5mClx6MXeaprlEu/bmtOTe5HQRvFprDFCmilYol8p1qDJFngdhYaSEZfl6hSjoHXX4BTLh3Spb/783L/Uo80n9ZDzAasSSZiWNakoxarnAbDF5urJ7cvImy55Yn4lU81TugSzi+FGxhh+5te4oHyRGYzQiQXQEeHEFr/Eq4G68Ril4yhA6aPsab7hNAHSoYe1gqHAp+6T2CCu8xFXLxBYBsvXQE1h1Ri9zoxqiuk65tzOAMqSwzhwrqe6AUoNNavsQEyGSA27AniFtXrIh5+S1UH7D6sOa1gpWw7nuOmoy+HSFnQ6Oep2PcnqZfNn/O1hfi4Jd7V/Rz4d8c2S5/ArvDu/O6U=) 2025-05-13 19:35:05.932648 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIx4h0JfY4nFJN1ZkM9SP/TT+6+WgjdVaR+T0SebeRVf) 2025-05-13 19:35:05.932985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOcexYWh7LAFMQntPGk6ytEqo6gX5v/mM4Gg/niskLUOii0GBwpKsaQEKK7s/qjIKsYZRRFcn4w48s5LwJo1CuY=) 2025-05-13 
19:35:05.934402 | orchestrator | 2025-05-13 19:35:05.934643 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:05.935205 | orchestrator | Tuesday 13 May 2025 19:35:05 +0000 (0:00:01.046) 0:00:24.560 *********** 2025-05-13 19:35:07.021860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcLCnwZfULP8ylWf8sPevE505BtqWmmH/Ue0UUepl2WXp4q7LM93RpmSSuP8rgk1SrjU1mrRbmxaiKf8xYlcgMkaBgSJm+HTbTZig8at+W3dw1InU+nKB08ikMbHyheTTFyFmczR4pJpOsQJyCrh6AKYaLcpjvNWzy1ctCGNegJ+RP5dmNekg+pJ5FfMGwBQcrYOtwzxp/wFK7wU6+V3AboWOEnTfnuQCNltGjnbMyN0PVgwNvZh79VzlpPKFQopwIYc1vt8IlVaJRfsl1INioblhQLZdOdQJMJOeMUU7287JBItEiPgLdBFcrlQXfjAP6plSSAc7oeQ8xiygCTp4HCR1xY+nfbOtdvZD0XzZ9y5AYkZoERTk9YJMwok4Jm11ec4JQ7iwxg6tvuqfzH3dIXJhrKKKWMon98EK+wb4L6QRJOEG2oTMCYgEfgfaYfsZTOa43YGxgTJAr/WfXFThcabsEKfirUQkgDQLOonvHzmYYrz5dE/9HSQFahlsHmkU=) 2025-05-13 19:35:07.022681 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECN0cBRIZjloJ2QQamW4IimZF+MlVIYgsaW13tHOglhcI5wpTCp9luK6dQj5aKIAVWveDQtdGOl8ynzHhwWbcc=) 2025-05-13 19:35:07.023154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEcSrN2KGosAlcsqZjjkCGivxn5hLJrTCHgu0OlpBWBz) 2025-05-13 19:35:07.024453 | orchestrator | 2025-05-13 19:35:07.024481 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-13 19:35:07.024763 | orchestrator | Tuesday 13 May 2025 19:35:07 +0000 (0:00:01.090) 0:00:25.650 *********** 2025-05-13 19:35:08.097577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLtvFCmHpPQQwBb7Lf9pswoKaH/Vyg7N8cBAj79TlFLzNy04k1DEGIvrCdxDo2JQLZPV32dXqzbRK3aQDuodyESR2/0o2jxHKFHa3v1vSKoGst6zaQ2T5MbEATkinsJ3iOqXG+Qita8c/W3o8vWvITXezOzudGnTtSnXqLzlyAgS2OW409Bwfx+IDmwLEv4krqy8a+4r+GUW08oy5bOCFnoKGnk571FaW4o0BfqUoZiC6Hxu5E+6821Ykl5uDK8rKe5yqVja+X/vIyq/iGP2etWayoph6/q2lcd5HEkgh8uyEBKmdIQutpLuaUUYd/R4dlxlaeSSaMGaOTb8AapMXL9Fs/V3PNbNSF5twaaQRWBiM1TeZiK0Cb/1I7thY6xoyG7meiddWni1jjtk0cFUL7vRiSRAJJFUl6EaGG9B7HUem15TXsqneC+oq/NotdeoMVcHxjP7elr1fPm/CLf3Tad+LldNUvhiHUEW4d8OR78DTVzXwcSfZDFQYdGmKJ5/k=) 2025-05-13 19:35:08.097874 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAPkPPlFq8oE7+eJ45eb1Y3HlO1yFd2gK8G1aZ57qxmT/U8T0aO68XriYmcAKziCntOUKtUlNRIkSZzfvhZOMSo=) 2025-05-13 19:35:08.098760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID+2HWRrlapc/zvhRKeIV4IZbeQXor+7VFZkHe30jqdU) 2025-05-13 19:35:08.099216 | orchestrator | 2025-05-13 19:35:08.100213 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-13 19:35:08.100515 | orchestrator | Tuesday 13 May 2025 19:35:08 +0000 (0:00:01.077) 0:00:26.728 *********** 2025-05-13 19:35:08.477487 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-13 19:35:08.477812 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-13 19:35:08.478281 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-13 19:35:08.478314 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-13 19:35:08.479528 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-13 19:35:08.480159 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-4)  2025-05-13 19:35:08.480617 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-13 19:35:08.481354 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:35:08.481694 | orchestrator | 2025-05-13 19:35:08.483749 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-13 19:35:08.484879 | orchestrator | Tuesday 13 May 2025 19:35:08 +0000 (0:00:00.379) 0:00:27.108 *********** 2025-05-13 19:35:08.534342 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:35:08.534451 | orchestrator | 2025-05-13 19:35:08.535926 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-13 19:35:08.536688 | orchestrator | Tuesday 13 May 2025 19:35:08 +0000 (0:00:00.056) 0:00:27.165 *********** 2025-05-13 19:35:08.584667 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:35:08.585583 | orchestrator | 2025-05-13 19:35:08.586001 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-13 19:35:08.587290 | orchestrator | Tuesday 13 May 2025 19:35:08 +0000 (0:00:00.050) 0:00:27.215 *********** 2025-05-13 19:35:09.102099 | orchestrator | changed: [testbed-manager] 2025-05-13 19:35:09.103380 | orchestrator | 2025-05-13 19:35:09.103548 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:35:09.104723 | orchestrator | 2025-05-13 19:35:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:35:09.104754 | orchestrator | 2025-05-13 19:35:09 | INFO  | Please wait and do not abort execution. 2025-05-13 19:35:09.104910 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 19:35:09.105923 | orchestrator | 2025-05-13 19:35:09.106204 | orchestrator | 2025-05-13 19:35:09.106518 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:35:09.107186 | orchestrator | Tuesday 13 May 2025 19:35:09 +0000 (0:00:00.517) 0:00:27.733 *********** 2025-05-13 19:35:09.107685 | orchestrator | =============================================================================== 2025-05-13 19:35:09.108253 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.76s 2025-05-13 19:35:09.108458 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.34s 2025-05-13 19:35:09.108835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-05-13 19:35:09.109084 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-13 19:35:09.109429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-13 19:35:09.109791 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-13 19:35:09.110164 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-13 19:35:09.110437 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-13 19:35:09.110624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-13 19:35:09.111186 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.07s 2025-05-13 19:35:09.111423 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-13 19:35:09.111689 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-13 19:35:09.112008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-13 19:35:09.112329 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-13 19:35:09.112607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-13 19:35:09.112937 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-13 19:35:09.113248 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-05-13 19:35:09.113620 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.38s 2025-05-13 19:35:09.113903 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-05-13 19:35:09.114747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-13 19:35:09.549332 | orchestrator | + osism apply squid 2025-05-13 19:35:11.257578 | orchestrator | 2025-05-13 19:35:11 | INFO  | Task 0ec77791-88cc-43f0-a9e9-9ebc61f7375d (squid) was prepared for execution. 2025-05-13 19:35:11.257711 | orchestrator | 2025-05-13 19:35:11 | INFO  | It takes a moment until task 0ec77791-88cc-43f0-a9e9-9ebc61f7375d (squid) has been started and output is visible here. 2025-05-13 19:35:15.227420 | orchestrator | 2025-05-13 19:35:15.227627 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-13 19:35:15.228234 | orchestrator | 2025-05-13 19:35:15.230186 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-13 19:35:15.231062 | orchestrator | Tuesday 13 May 2025 19:35:15 +0000 (0:00:00.182) 0:00:00.182 *********** 2025-05-13 19:35:15.336259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-13 19:35:15.337102 | orchestrator | 2025-05-13 19:35:15.338178 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-13 19:35:15.339732 | orchestrator | Tuesday 13 May 2025 19:35:15 +0000 (0:00:00.111) 0:00:00.294 *********** 2025-05-13 19:35:16.750721 | orchestrator | ok: [testbed-manager] 2025-05-13 19:35:16.751093 | orchestrator | 2025-05-13 19:35:16.751949 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-13 19:35:16.752900 | orchestrator | Tuesday 13 May 2025 19:35:16 +0000 (0:00:01.413) 0:00:01.707 *********** 2025-05-13 19:35:17.930642 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-13 19:35:17.930753 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-13 19:35:17.931918 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-13 19:35:17.932575 | orchestrator | 2025-05-13 19:35:17.933564 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-13 19:35:17.934631 | orchestrator | Tuesday 13 May 2025 19:35:17 +0000 
(0:00:01.179) 0:00:02.887 *********** 2025-05-13 19:35:19.028461 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-13 19:35:19.028642 | orchestrator | 2025-05-13 19:35:19.030246 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-13 19:35:19.031132 | orchestrator | Tuesday 13 May 2025 19:35:19 +0000 (0:00:01.097) 0:00:03.984 *********** 2025-05-13 19:35:19.397054 | orchestrator | ok: [testbed-manager] 2025-05-13 19:35:19.397251 | orchestrator | 2025-05-13 19:35:19.398244 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-13 19:35:19.398468 | orchestrator | Tuesday 13 May 2025 19:35:19 +0000 (0:00:00.368) 0:00:04.353 *********** 2025-05-13 19:35:20.311591 | orchestrator | changed: [testbed-manager] 2025-05-13 19:35:20.311723 | orchestrator | 2025-05-13 19:35:20.312154 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-13 19:35:20.312414 | orchestrator | Tuesday 13 May 2025 19:35:20 +0000 (0:00:00.916) 0:00:05.269 *********** 2025-05-13 19:36:02.575727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-13 19:36:02.575829 | orchestrator | ok: [testbed-manager] 2025-05-13 19:36:02.575908 | orchestrator | 2025-05-13 19:36:02.577073 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-13 19:36:02.578043 | orchestrator | Tuesday 13 May 2025 19:36:02 +0000 (0:00:42.260) 0:00:47.529 *********** 2025-05-13 19:36:39.826464 | orchestrator | changed: [testbed-manager] 2025-05-13 19:36:39.826581 | orchestrator | 2025-05-13 19:36:39.826599 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-13 19:36:39.826612 | orchestrator | Tuesday 13 May 2025 19:36:39 +0000 (0:00:37.246) 0:01:24.776 *********** 2025-05-13 19:37:39.905580 | orchestrator | Pausing for 60 seconds 2025-05-13 19:37:39.905766 | orchestrator | changed: [testbed-manager] 2025-05-13 19:37:39.905797 | orchestrator | 2025-05-13 19:37:39.905811 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-13 19:37:39.905940 | orchestrator | Tuesday 13 May 2025 19:37:39 +0000 (0:01:00.082) 0:02:24.858 *********** 2025-05-13 19:37:39.991842 | orchestrator | ok: [testbed-manager] 2025-05-13 19:37:39.992321 | orchestrator | 2025-05-13 19:37:39.993240 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-13 19:37:39.995019 | orchestrator | Tuesday 13 May 2025 19:37:39 +0000 (0:00:00.092) 0:02:24.950 *********** 2025-05-13 19:37:40.646289 | orchestrator | changed: [testbed-manager] 2025-05-13 19:37:40.647398 | orchestrator | 2025-05-13 19:37:40.647429 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:37:40.647777 | orchestrator | 2025-05-13 19:37:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:37:40.647981 | orchestrator | 2025-05-13 19:37:40 | INFO  | Please wait and do not abort execution. 
2025-05-13 19:37:40.648963 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:37:40.649443 | orchestrator | 2025-05-13 19:37:40.650493 | orchestrator | 2025-05-13 19:37:40.650793 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:37:40.651154 | orchestrator | Tuesday 13 May 2025 19:37:40 +0000 (0:00:00.653) 0:02:25.604 *********** 2025-05-13 19:37:40.651536 | orchestrator | =============================================================================== 2025-05-13 19:37:40.651949 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-13 19:37:40.652432 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 42.26s 2025-05-13 19:37:40.652893 | orchestrator | osism.services.squid : Restart squid service --------------------------- 37.25s 2025-05-13 19:37:40.654090 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s 2025-05-13 19:37:40.655322 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2025-05-13 19:37:40.655359 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2025-05-13 19:37:40.656031 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-05-13 19:37:40.656562 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-05-13 19:37:40.657004 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-05-13 19:37:40.657374 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2025-05-13 19:37:40.657841 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-05-13 19:37:41.139600 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 19:37:41.139812 | orchestrator | ++ semver latest 9.0.0 2025-05-13 19:37:41.178749 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-13 19:37:41.178794 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 19:37:41.179311 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-13 19:37:42.885289 | orchestrator | 2025-05-13 19:37:42 | INFO  | Task 4b794c56-8cbb-491c-be1d-66dbfd61b178 (operator) was prepared for execution. 2025-05-13 19:37:42.885398 | orchestrator | 2025-05-13 19:37:42 | INFO  | It takes a moment until task 4b794c56-8cbb-491c-be1d-66dbfd61b178 (operator) has been started and output is visible here. 
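[Editor's note] The osism apply operator run that follows provisions the operator account on the testbed nodes. As a rough orientation only — the user name "dragon" and the exact commands below are assumptions inferred from the task names in this log, not taken from the role's source — the same steps look approximately like this in plain shell:

    # Illustrative sketch of the operator role's main steps; all names assumed.
    OPERATOR=dragon                                          # assumed operator user name
    groupadd "${OPERATOR}"                                   # "Create operator group"
    useradd -m -g "${OPERATOR}" -s /bin/bash "${OPERATOR}"   # "Create user"
    usermod -aG adm,sudo "${OPERATOR}"                       # "Add user to additional groups"
    echo "${OPERATOR} ALL=(ALL) NOPASSWD: ALL" \
        > "/etc/sudoers.d/${OPERATOR}"                       # "Copy user sudoers file"
    chmod 0440 "/etc/sudoers.d/${OPERATOR}"
    install -d -m 0700 -o "${OPERATOR}" -g "${OPERATOR}" \
        "/home/${OPERATOR}/.ssh"                             # "Create .ssh directory"
    # "Set ssh authorized keys": append the deployer's public key
    # (deploy_key.pub is a placeholder path, not from this log)
    cat deploy_key.pub >> "/home/${OPERATOR}/.ssh/authorized_keys"
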
2025-05-13 19:37:46.916723 | orchestrator | 2025-05-13 19:37:46.920146 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-13 19:37:46.920205 | orchestrator | 2025-05-13 19:37:46.920218 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 19:37:46.920230 | orchestrator | Tuesday 13 May 2025 19:37:46 +0000 (0:00:00.147) 0:00:00.147 *********** 2025-05-13 19:37:50.128809 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:37:50.129111 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:37:50.129890 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:37:50.131238 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:37:50.131262 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:37:50.131657 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:37:50.132287 | orchestrator | 2025-05-13 19:37:50.132794 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-13 19:37:50.133904 | orchestrator | Tuesday 13 May 2025 19:37:50 +0000 (0:00:03.213) 0:00:03.361 *********** 2025-05-13 19:37:50.917026 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:37:50.917292 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:37:50.917740 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:37:50.918099 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:37:50.918967 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:37:50.918992 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:37:50.920065 | orchestrator | 2025-05-13 19:37:50.921079 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-13 19:37:50.921811 | orchestrator | 2025-05-13 19:37:50.922896 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-13 19:37:50.923512 | orchestrator | Tuesday 13 May 2025 19:37:50 +0000 (0:00:00.789) 0:00:04.150 *********** 2025-05-13 19:37:50.995305 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:37:51.020422 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:37:51.047757 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:37:51.092760 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:37:51.093303 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:37:51.093772 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:37:51.094370 | orchestrator | 2025-05-13 19:37:51.095563 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-13 19:37:51.097592 | orchestrator | Tuesday 13 May 2025 19:37:51 +0000 (0:00:00.175) 0:00:04.326 *********** 2025-05-13 19:37:51.162886 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:37:51.189777 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:37:51.214305 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:37:51.255555 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:37:51.256881 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:37:51.258401 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:37:51.259982 | orchestrator | 2025-05-13 19:37:51.261331 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-13 19:37:51.262113 | orchestrator | Tuesday 13 May 2025 19:37:51 +0000 (0:00:00.162) 0:00:04.489 *********** 2025-05-13 19:37:51.848619 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:37:51.849577 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:51.850715 | orchestrator | changed: [testbed-node-2] 2025-05-13 
19:37:51.851233 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:51.852081 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:51.852758 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:51.853433 | orchestrator | 2025-05-13 19:37:51.854007 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-13 19:37:51.854873 | orchestrator | Tuesday 13 May 2025 19:37:51 +0000 (0:00:00.592) 0:00:05.081 *********** 2025-05-13 19:37:52.678130 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:37:52.678285 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:52.678424 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:52.679297 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:37:52.679824 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:52.680916 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:52.681212 | orchestrator | 2025-05-13 19:37:52.681935 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-13 19:37:52.682399 | orchestrator | Tuesday 13 May 2025 19:37:52 +0000 (0:00:00.828) 0:00:05.910 *********** 2025-05-13 19:37:53.840786 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-13 19:37:53.841576 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-13 19:37:53.843047 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-13 19:37:53.843765 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-13 19:37:53.844878 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-13 19:37:53.845484 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-13 19:37:53.846457 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-13 19:37:53.847266 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-13 19:37:53.848053 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-13 19:37:53.848817 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-13 19:37:53.849612 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-13 19:37:53.850099 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-13 19:37:53.850716 | orchestrator | 2025-05-13 19:37:53.851244 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-13 19:37:53.851858 | orchestrator | Tuesday 13 May 2025 19:37:53 +0000 (0:00:01.162) 0:00:07.073 *********** 2025-05-13 19:37:55.133103 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:37:55.133292 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:55.134532 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:55.136818 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:37:55.137635 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:55.138456 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:55.139530 | orchestrator | 2025-05-13 19:37:55.139941 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-13 19:37:55.140775 | orchestrator | Tuesday 13 May 2025 19:37:55 +0000 (0:00:01.291) 0:00:08.364 *********** 2025-05-13 19:37:56.316304 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-13 19:37:56.317049 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-13 19:37:56.317081 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-13 19:37:56.407788 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.408953 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.409674 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.410836 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.412443 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.413508 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-13 19:37:56.414806 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.415442 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.416250 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.417407 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.417967 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.418959 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-13 19:37:56.419534 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.420491 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.420926 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.421612 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.422106 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.422871 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-13 19:37:56.423789 | orchestrator | 2025-05-13 19:37:56.424432 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-13 19:37:56.425020 | orchestrator | Tuesday 13 May 2025 19:37:56 +0000 (0:00:01.276) 0:00:09.641 *********** 2025-05-13 19:37:57.023330 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:37:57.023630 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:57.024875 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:57.026396 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:37:57.027713 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:57.028102 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:57.028964 | orchestrator | 2025-05-13 19:37:57.030107 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-13 19:37:57.030707 | orchestrator | Tuesday 13 May 2025 19:37:57 +0000 (0:00:00.614) 0:00:10.255 *********** 2025-05-13 19:37:57.103544 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:37:57.123544 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:37:57.157090 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:37:57.197663 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:37:57.197843 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:37:57.198676 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:37:57.199085 | orchestrator | 2025-05-13 19:37:57.200375 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-13 19:37:57.200685 | orchestrator | Tuesday 13 May 2025 19:37:57 +0000 (0:00:00.176) 0:00:10.432 *********** 2025-05-13 19:37:57.891963 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-13 19:37:57.892058 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 19:37:57.893946 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:57.895886 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:57.896730 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 19:37:57.897503 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:37:57.898378 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 19:37:57.899079 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:57.901039 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 19:37:57.901715 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-13 19:37:57.902757 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:37:57.903354 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:57.904225 | orchestrator | 2025-05-13 19:37:57.904668 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-13 19:37:57.905473 | orchestrator | Tuesday 13 May 2025 19:37:57 +0000 (0:00:00.690) 0:00:11.122 *********** 2025-05-13 19:37:57.942897 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:37:57.963437 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:37:58.010588 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:37:58.055986 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:37:58.056952 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:37:58.061005 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:37:58.061094 | orchestrator | 2025-05-13 19:37:58.061110 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-13 19:37:58.061250 | orchestrator | Tuesday 13 May 2025 19:37:58 +0000 (0:00:00.166) 0:00:11.289 *********** 2025-05-13 19:37:58.116782 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:37:58.138357 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:37:58.189715 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:37:58.217724 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:37:58.220111 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:37:58.221269 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:37:58.221303 | orchestrator | 2025-05-13 19:37:58.221955 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-13 19:37:58.222679 | orchestrator | Tuesday 13 May 2025 19:37:58 +0000 (0:00:00.162) 0:00:11.451 *********** 2025-05-13 19:37:58.292721 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:37:58.316874 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:37:58.336867 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:37:58.381334 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:37:58.382150 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:37:58.383060 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:37:58.383503 | orchestrator | 2025-05-13 19:37:58.384279 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-13 19:37:58.385024 | orchestrator | Tuesday 13 May 2025 19:37:58 +0000 (0:00:00.162) 0:00:11.614 *********** 2025-05-13 19:37:59.010287 | orchestrator | changed: [testbed-node-0] 2025-05-13 
19:37:59.010652 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:37:59.011975 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:37:59.013540 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:37:59.013959 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:37:59.014730 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:37:59.015307 | orchestrator | 2025-05-13 19:37:59.016236 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-13 19:37:59.016519 | orchestrator | Tuesday 13 May 2025 19:37:59 +0000 (0:00:00.626) 0:00:12.241 *********** 2025-05-13 19:37:59.109862 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:37:59.139795 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:37:59.236693 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:37:59.238512 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:37:59.239851 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:37:59.241323 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:37:59.242706 | orchestrator | 2025-05-13 19:37:59.244004 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:37:59.244494 | orchestrator | 2025-05-13 19:37:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:37:59.245242 | orchestrator | 2025-05-13 19:37:59 | INFO  | Please wait and do not abort execution. 2025-05-13 19:37:59.246657 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.247582 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.248912 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.250200 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.251526 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.251983 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:37:59.252902 | orchestrator | 2025-05-13 19:37:59.253885 | orchestrator | 2025-05-13 19:37:59.254815 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:37:59.255340 | orchestrator | Tuesday 13 May 2025 19:37:59 +0000 (0:00:00.228) 0:00:12.469 *********** 2025-05-13 19:37:59.256297 | orchestrator | =============================================================================== 2025-05-13 19:37:59.256845 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s 2025-05-13 19:37:59.257757 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s 2025-05-13 19:37:59.258599 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-05-13 19:37:59.259419 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2025-05-13 19:37:59.259898 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s 2025-05-13 19:37:59.260964 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2025-05-13 19:37:59.261587 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2025-05-13 19:37:59.261881 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-05-13 19:37:59.262620 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2025-05-13 19:37:59.263112 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2025-05-13 19:37:59.263828 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-05-13 19:37:59.264370 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-05-13 19:37:59.264766 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-05-13 19:37:59.265216 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-05-13 19:37:59.265590 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-13 19:37:59.266251 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-05-13 19:37:59.266603 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-05-13 19:37:59.701680 | orchestrator | + osism apply --environment custom facts 2025-05-13 19:38:01.356264 | orchestrator | 2025-05-13 19:38:01 | INFO  | Trying to run play facts in environment custom 2025-05-13 19:38:01.415584 | orchestrator | 2025-05-13 19:38:01 | INFO  | Task 27f7756b-c066-4cf2-8624-7724916e04d8 (facts) was prepared for execution. 2025-05-13 19:38:01.415690 | orchestrator | 2025-05-13 19:38:01 | INFO  | It takes a moment until task 27f7756b-c066-4cf2-8624-7724916e04d8 (facts) has been started and output is visible here. 
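[Editor's note] The facts play that follows distributes custom facts to the hosts. For orientation: Ansible reads local facts from files under /etc/ansible/facts.d and exposes them to playbooks as ansible_local.<name>. The fact name below matches one from this log (testbed_ceph_devices); the file body is a made-up example, not the testbed's real content:

    # Illustrative sketch only; the device list is invented.
    install -d -m 0755 /etc/ansible/facts.d      # "Create custom facts directory"
    cat > /etc/ansible/facts.d/testbed_ceph_devices.fact <<'EOF'
    #!/usr/bin/env bash
    # An executable .fact file must print JSON; a static JSON/INI file also works.
    echo '{"devices": ["/dev/sdb", "/dev/sdc"]}'
    EOF
    chmod +x /etc/ansible/facts.d/testbed_ceph_devices.fact
    # Verify on a node: ansible localhost -m setup -a 'filter=ansible_local'
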
2025-05-13 19:38:05.336898 | orchestrator | 2025-05-13 19:38:05.336996 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-13 19:38:05.338069 | orchestrator | 2025-05-13 19:38:05.339867 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 19:38:05.339906 | orchestrator | Tuesday 13 May 2025 19:38:05 +0000 (0:00:00.101) 0:00:00.101 *********** 2025-05-13 19:38:06.767103 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:06.767242 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:06.769876 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:06.771610 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:38:06.773351 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:38:06.774438 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:06.775884 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:38:06.777145 | orchestrator | 2025-05-13 19:38:06.778572 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-13 19:38:06.779128 | orchestrator | Tuesday 13 May 2025 19:38:06 +0000 (0:00:01.431) 0:00:01.533 *********** 2025-05-13 19:38:07.974566 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:07.975537 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:07.976520 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:38:07.977115 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:07.978396 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:38:07.979528 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:38:07.979577 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:07.979790 | orchestrator | 2025-05-13 19:38:07.980901 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-13 19:38:07.981160 | orchestrator | 2025-05-13 19:38:07.981770 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 19:38:07.982403 | orchestrator | Tuesday 13 May 2025 19:38:07 +0000 (0:00:01.212) 0:00:02.745 *********** 2025-05-13 19:38:08.097250 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:08.097479 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:08.098400 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:08.099348 | orchestrator | 2025-05-13 19:38:08.099923 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 19:38:08.100814 | orchestrator | Tuesday 13 May 2025 19:38:08 +0000 (0:00:00.121) 0:00:02.867 *********** 2025-05-13 19:38:08.295481 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:08.296300 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:08.297169 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:08.298232 | orchestrator | 2025-05-13 19:38:08.299258 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 19:38:08.300207 | orchestrator | Tuesday 13 May 2025 19:38:08 +0000 (0:00:00.198) 0:00:03.066 *********** 2025-05-13 19:38:08.483238 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:08.483365 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:08.483375 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:08.483562 | orchestrator | 2025-05-13 19:38:08.483903 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 19:38:08.484149 | orchestrator | Tuesday 13 
May 2025 19:38:08 +0000 (0:00:00.188) 0:00:03.254 *********** 2025-05-13 19:38:08.639749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:38:08.639866 | orchestrator | 2025-05-13 19:38:08.644135 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13 19:38:08.644175 | orchestrator | Tuesday 13 May 2025 19:38:08 +0000 (0:00:00.155) 0:00:03.409 *********** 2025-05-13 19:38:09.138350 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:09.138544 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:09.139586 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:09.140473 | orchestrator | 2025-05-13 19:38:09.140898 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 19:38:09.141585 | orchestrator | Tuesday 13 May 2025 19:38:09 +0000 (0:00:00.499) 0:00:03.908 *********** 2025-05-13 19:38:09.262382 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:38:09.262963 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:38:09.265541 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:38:09.266129 | orchestrator | 2025-05-13 19:38:09.266896 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 19:38:09.267793 | orchestrator | Tuesday 13 May 2025 19:38:09 +0000 (0:00:00.124) 0:00:04.033 *********** 2025-05-13 19:38:10.305871 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:10.309657 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:10.311534 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:10.312075 | orchestrator | 2025-05-13 19:38:10.313902 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 19:38:10.315885 | orchestrator | Tuesday 13 May 2025 19:38:10 +0000 (0:00:01.038) 0:00:05.072 *********** 2025-05-13 19:38:10.810164 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:10.812700 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:10.812755 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:10.812767 | orchestrator | 2025-05-13 19:38:10.813352 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 19:38:10.814236 | orchestrator | Tuesday 13 May 2025 19:38:10 +0000 (0:00:00.506) 0:00:05.579 *********** 2025-05-13 19:38:11.889400 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:11.890513 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:11.891637 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:11.892531 | orchestrator | 2025-05-13 19:38:11.893129 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 19:38:11.893921 | orchestrator | Tuesday 13 May 2025 19:38:11 +0000 (0:00:01.079) 0:00:06.658 *********** 2025-05-13 19:38:25.651480 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:25.651622 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:25.652018 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:25.652054 | orchestrator | 2025-05-13 19:38:25.653589 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-13 19:38:25.654336 | orchestrator | Tuesday 13 May 2025 19:38:25 +0000 (0:00:13.758) 0:00:20.417 *********** 2025-05-13 19:38:25.706700 | orchestrator | skipping: 
[testbed-node-3] 2025-05-13 19:38:25.745783 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:38:25.747117 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:38:25.748539 | orchestrator | 2025-05-13 19:38:25.749089 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-13 19:38:25.749833 | orchestrator | Tuesday 13 May 2025 19:38:25 +0000 (0:00:00.098) 0:00:20.515 *********** 2025-05-13 19:38:32.780687 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:32.782764 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:32.784972 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:32.786975 | orchestrator | 2025-05-13 19:38:32.787151 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-13 19:38:32.788166 | orchestrator | Tuesday 13 May 2025 19:38:32 +0000 (0:00:07.026) 0:00:27.542 *********** 2025-05-13 19:38:33.198564 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:33.199829 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:33.200611 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:33.201397 | orchestrator | 2025-05-13 19:38:33.201923 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-13 19:38:33.202614 | orchestrator | Tuesday 13 May 2025 19:38:33 +0000 (0:00:00.426) 0:00:27.968 *********** 2025-05-13 19:38:36.685435 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-13 19:38:36.685555 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-13 19:38:36.685640 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-13 19:38:36.686117 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-13 19:38:36.687070 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-13 19:38:36.687099 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-13 19:38:36.687965 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-13 19:38:36.688424 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-13 19:38:36.688871 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-13 19:38:36.689307 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-13 19:38:36.689926 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-13 19:38:36.690521 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-13 19:38:36.690703 | orchestrator | 2025-05-13 19:38:36.691382 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 19:38:36.692652 | orchestrator | Tuesday 13 May 2025 19:38:36 +0000 (0:00:03.484) 0:00:31.453 *********** 2025-05-13 19:38:37.830903 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:37.832427 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:37.832456 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:37.833545 | orchestrator | 2025-05-13 19:38:37.834611 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 19:38:37.834943 | orchestrator | 2025-05-13 19:38:37.835595 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 19:38:37.836293 | orchestrator | Tuesday 
13 May 2025 19:38:37 +0000 (0:00:01.146) 0:00:32.600 *********** 2025-05-13 19:38:42.546699 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:38:42.546968 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:38:42.547741 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:38:42.549250 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:42.549627 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:42.552063 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:42.553273 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:42.554146 | orchestrator | 2025-05-13 19:38:42.555590 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:38:42.555637 | orchestrator | 2025-05-13 19:38:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:38:42.555653 | orchestrator | 2025-05-13 19:38:42 | INFO  | Please wait and do not abort execution. 2025-05-13 19:38:42.556250 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:38:42.556951 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:38:42.557084 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:38:42.557585 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:38:42.558152 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:38:42.558865 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:38:42.559419 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:38:42.559891 | orchestrator | 2025-05-13 19:38:42.560398 | orchestrator | 2025-05-13 19:38:42.561074 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:38:42.561801 | orchestrator | Tuesday 13 May 2025 19:38:42 +0000 (0:00:04.715) 0:00:37.316 *********** 2025-05-13 19:38:42.563069 | orchestrator | =============================================================================== 2025-05-13 19:38:42.563771 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.76s 2025-05-13 19:38:42.564157 | orchestrator | Install required packages (Debian) -------------------------------------- 7.03s 2025-05-13 19:38:42.564754 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-05-13 19:38:42.564867 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s 2025-05-13 19:38:42.565705 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s 2025-05-13 19:38:42.565800 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s 2025-05-13 19:38:42.566291 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.15s 2025-05-13 19:38:42.566639 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s 2025-05-13 19:38:42.567027 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s 2025-05-13 19:38:42.567436 | orchestrator | osism.commons.repository : Remove sources.list file 
--------------------- 0.51s 2025-05-13 19:38:42.567762 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.50s 2025-05-13 19:38:42.568671 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-05-13 19:38:42.569358 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-05-13 19:38:42.570130 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-05-13 19:38:42.570781 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-05-13 19:38:42.571344 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-13 19:38:42.572019 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-13 19:38:42.572559 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-05-13 19:38:43.042088 | orchestrator | + osism apply bootstrap 2025-05-13 19:38:44.777901 | orchestrator | 2025-05-13 19:38:44 | INFO  | Task 08881e65-985b-4f8b-9d08-27a20e1cad2d (bootstrap) was prepared for execution. 2025-05-13 19:38:44.777979 | orchestrator | 2025-05-13 19:38:44 | INFO  | It takes a moment until task 08881e65-985b-4f8b-9d08-27a20e1cad2d (bootstrap) has been started and output is visible here. 2025-05-13 19:38:48.897702 | orchestrator | 2025-05-13 19:38:48.897916 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-13 19:38:48.902153 | orchestrator | 2025-05-13 19:38:48.902269 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-13 19:38:48.902287 | orchestrator | Tuesday 13 May 2025 19:38:48 +0000 (0:00:00.169) 0:00:00.169 *********** 2025-05-13 19:38:48.993119 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:49.020784 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:38:49.046656 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:38:49.074899 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:38:49.187505 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:49.187827 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:49.188434 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:49.189165 | orchestrator | 2025-05-13 19:38:49.190183 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 19:38:49.190609 | orchestrator | 2025-05-13 19:38:49.191311 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 19:38:49.192037 | orchestrator | Tuesday 13 May 2025 19:38:49 +0000 (0:00:00.292) 0:00:00.461 *********** 2025-05-13 19:38:52.951733 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:38:52.952552 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:38:52.953674 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:38:52.954930 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:52.955563 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:52.956543 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:52.957400 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:52.958736 | orchestrator | 2025-05-13 19:38:52.959442 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-13 19:38:52.960329 | orchestrator | 2025-05-13 19:38:52.961233 | orchestrator | TASK [Gathers facts about 
hosts] *********************************************** 2025-05-13 19:38:52.962165 | orchestrator | Tuesday 13 May 2025 19:38:52 +0000 (0:00:03.764) 0:00:04.225 *********** 2025-05-13 19:38:53.031066 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-13 19:38:53.077516 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-13 19:38:53.077612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-13 19:38:53.077687 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 19:38:53.080791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-13 19:38:53.081139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 19:38:53.123919 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-13 19:38:53.124265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-13 19:38:53.124604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 19:38:53.125061 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-13 19:38:53.125522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-13 19:38:53.125878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 19:38:53.126350 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-13 19:38:53.126788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-13 19:38:53.161865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 19:38:53.162153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-13 19:38:53.162725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 19:38:53.163737 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-13 19:38:53.163779 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-13 19:38:53.163807 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-13 19:38:53.422506 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:38:53.424812 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:38:53.427942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-13 19:38:53.429190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-13 19:38:53.430342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-13 19:38:53.431724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-13 19:38:53.432792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-13 19:38:53.433699 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-13 19:38:53.434731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-13 19:38:53.435667 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-13 19:38:53.439518 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-13 19:38:53.439571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 19:38:53.439584 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-13 19:38:53.439595 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-13 19:38:53.439613 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:38:53.439631 | orchestrator | 
skipping: [testbed-node-2] 2025-05-13 19:38:53.439643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-13 19:38:53.440298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 19:38:53.440952 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-13 19:38:53.441784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-13 19:38:53.443192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 19:38:53.444009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-13 19:38:53.448740 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-13 19:38:53.448765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 19:38:53.448778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-13 19:38:53.448789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 19:38:53.448801 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-13 19:38:53.448812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-13 19:38:53.449253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 19:38:53.450156 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-13 19:38:53.450687 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:38:53.451598 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:38:53.452325 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-13 19:38:53.453235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-13 19:38:53.454218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-13 19:38:53.454617 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:38:53.455771 | orchestrator | 2025-05-13 19:38:53.456249 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-13 19:38:53.457027 | orchestrator | 2025-05-13 19:38:53.457548 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-13 19:38:53.458513 | orchestrator | Tuesday 13 May 2025 19:38:53 +0000 (0:00:00.471) 0:00:04.696 *********** 2025-05-13 19:38:54.682979 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:54.684089 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:54.685680 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:38:54.689861 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:54.689909 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:38:54.689921 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:38:54.689933 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:54.689993 | orchestrator | 2025-05-13 19:38:54.690658 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-13 19:38:54.691599 | orchestrator | Tuesday 13 May 2025 19:38:54 +0000 (0:00:01.260) 0:00:05.956 *********** 2025-05-13 19:38:55.953674 | orchestrator | ok: [testbed-manager] 2025-05-13 19:38:55.956640 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:38:55.957248 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:38:55.958589 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:38:55.959438 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:38:55.960566 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:38:55.961001 | orchestrator | ok: [testbed-node-0]
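The first bootstrap roles now run. osism.commons.hostname reports ok on all seven hosts, i.e. nothing had to change because the instances already carry their inventory names; on a renamed host both tasks would report changed. The pattern behind the two tasks is roughly the following sketch (the variable and template names are assumptions, not the role's exact code):

    - name: Set hostname
      ansible.builtin.hostname:
        name: "{{ hostname_name | default(inventory_hostname_short) }}"

    - name: Copy /etc/hostname
      ansible.builtin.template:
        src: hostname.j2
        dest: /etc/hostname
        owner: root
        group: root
        mode: "0644"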
2025-05-13 19:38:55.961714 | orchestrator | 2025-05-13 19:38:55.962177 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-13 19:38:55.962601 | orchestrator | Tuesday 13 May 2025 19:38:55 +0000 (0:00:01.269) 0:00:07.226 *********** 2025-05-13 19:38:56.216726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:38:56.221071 | orchestrator | 2025-05-13 19:38:56.221116 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-13 19:38:56.221131 | orchestrator | Tuesday 13 May 2025 19:38:56 +0000 (0:00:00.262) 0:00:07.488 *********** 2025-05-13 19:38:58.261062 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:38:58.263002 | orchestrator | changed: [testbed-manager] 2025-05-13 19:38:58.263058 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:58.265046 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:38:58.267913 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:58.269953 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:38:58.270939 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:58.271871 | orchestrator | 2025-05-13 19:38:58.272631 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-13 19:38:58.273358 | orchestrator | Tuesday 13 May 2025 19:38:58 +0000 (0:00:02.044) 0:00:09.533 *********** 2025-05-13 19:38:58.330600 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:38:58.557391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:38:58.558361 | orchestrator | 2025-05-13 19:38:58.559241 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-13 19:38:58.560244 | orchestrator | Tuesday 13 May 2025 19:38:58 +0000 (0:00:00.297) 0:00:09.831 *********** 2025-05-13 19:38:59.561423 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:38:59.562242 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:38:59.562340 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:38:59.562639 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:38:59.563348 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:38:59.563925 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:38:59.564400 | orchestrator | 2025-05-13 19:38:59.565193 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-13 19:38:59.565600 | orchestrator | Tuesday 13 May 2025 19:38:59 +0000 (0:00:01.002) 0:00:10.833 *********** 2025-05-13 19:38:59.633478 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:00.149726 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:00.150317 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:00.150354 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:00.150794 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:00.151423 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:00.152248 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:00.152821 | orchestrator | 2025-05-13 19:39:00.154534 | orchestrator | TASK [osism.commons.proxy : Remove system wide
settings in environment file] *** 2025-05-13 19:39:00.154594 | orchestrator | Tuesday 13 May 2025 19:39:00 +0000 (0:00:00.590) 0:00:11.423 *********** 2025-05-13 19:39:00.246966 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:00.275667 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:00.312093 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:00.560075 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:00.560180 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:00.560664 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:00.560927 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:00.561561 | orchestrator | 2025-05-13 19:39:00.562173 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-13 19:39:00.562653 | orchestrator | Tuesday 13 May 2025 19:39:00 +0000 (0:00:00.411) 0:00:11.835 *********** 2025-05-13 19:39:00.639296 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:00.663999 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:00.685454 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:00.710358 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:00.775747 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:00.776083 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:00.777645 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:00.780987 | orchestrator | 2025-05-13 19:39:00.781731 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-13 19:39:00.783056 | orchestrator | Tuesday 13 May 2025 19:39:00 +0000 (0:00:00.215) 0:00:12.050 *********** 2025-05-13 19:39:01.049994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:01.050786 | orchestrator | 2025-05-13 19:39:01.051700 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-13 19:39:01.054905 | orchestrator | Tuesday 13 May 2025 19:39:01 +0000 (0:00:00.274) 0:00:12.324 *********** 2025-05-13 19:39:01.343091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:01.343409 | orchestrator | 2025-05-13 19:39:01.344279 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-13 19:39:01.346961 | orchestrator | Tuesday 13 May 2025 19:39:01 +0000 (0:00:00.290) 0:00:12.615 *********** 2025-05-13 19:39:02.777135 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:02.779152 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:02.779239 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:02.779819 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:02.780188 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:02.783823 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:02.783858 | orchestrator | ok: [testbed-node-2]
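osism.commons.resolvconf first removes packages that compete for control of /etc/resolv.conf; the tasks that follow stat the file, archive it if necessary, and point it at the systemd-resolved stub resolver. The symlink step matches this common pattern (a sketch following systemd-resolved conventions, not necessarily the role's exact task):

    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        # force is assumed here so an existing regular file is replaced
        force: true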
2025-05-13 19:39:02.784515 | orchestrator | 2025-05-13 19:39:02.785404 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-13 19:39:02.787227 | orchestrator | Tuesday 13 May 2025 19:39:02 +0000 (0:00:01.434) 0:00:14.049 *********** 2025-05-13 19:39:02.884380 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:02.913418 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:02.938388 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:02.964459 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:03.028737 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:03.030410 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:03.030914 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:03.030937 | orchestrator | 2025-05-13 19:39:03.031969 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-13 19:39:03.033668 | orchestrator | Tuesday 13 May 2025 19:39:03 +0000 (0:00:00.250) 0:00:14.300 *********** 2025-05-13 19:39:03.636080 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:03.637288 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:03.638258 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:03.638953 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:03.640231 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:03.641422 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:03.642676 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:03.643465 | orchestrator | 2025-05-13 19:39:03.644291 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-13 19:39:03.645009 | orchestrator | Tuesday 13 May 2025 19:39:03 +0000 (0:00:00.606) 0:00:14.907 *********** 2025-05-13 19:39:03.751106 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:03.779415 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:03.824283 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:03.931574 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:03.932104 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:03.933226 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:03.934538 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:03.935487 | orchestrator | 2025-05-13 19:39:03.936137 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-13 19:39:03.937466 | orchestrator | Tuesday 13 May 2025 19:39:03 +0000 (0:00:00.298) 0:00:15.206 *********** 2025-05-13 19:39:04.509336 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:04.509939 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:04.511437 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:04.511979 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:04.512854 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:04.513583 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:04.514192 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:04.515098 | orchestrator | 2025-05-13 19:39:04.515651 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-13 19:39:04.516308 | orchestrator | Tuesday 13 May 2025 19:39:04 +0000 (0:00:00.576) 0:00:15.782 *********** 2025-05-13 19:39:05.622128 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:05.623314 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:05.623774 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:05.625021 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:05.625841 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:05.626695 | orchestrator | changed:
[testbed-node-3] 2025-05-13 19:39:05.627294 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:05.628054 | orchestrator | 2025-05-13 19:39:05.628928 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-13 19:39:05.629487 | orchestrator | Tuesday 13 May 2025 19:39:05 +0000 (0:00:01.111) 0:00:16.894 *********** 2025-05-13 19:39:06.679046 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:06.680019 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:06.680113 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:06.681164 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:06.682912 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:06.683040 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:06.683919 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:06.684587 | orchestrator | 2025-05-13 19:39:06.685692 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-13 19:39:06.685756 | orchestrator | Tuesday 13 May 2025 19:39:06 +0000 (0:00:01.056) 0:00:17.951 *********** 2025-05-13 19:39:07.014611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:07.017168 | orchestrator | 2025-05-13 19:39:07.019000 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-13 19:39:07.019038 | orchestrator | Tuesday 13 May 2025 19:39:07 +0000 (0:00:00.336) 0:00:18.287 *********** 2025-05-13 19:39:07.094315 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:08.410889 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:08.411000 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:08.411075 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:08.412282 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:08.412839 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:08.413437 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:08.414329 | orchestrator | 2025-05-13 19:39:08.415132 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-13 19:39:08.415586 | orchestrator | Tuesday 13 May 2025 19:39:08 +0000 (0:00:01.395) 0:00:19.683 *********** 2025-05-13 19:39:08.514622 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:08.533282 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:08.562826 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:08.638787 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:08.640437 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:08.641325 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:08.642591 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:08.643537 | orchestrator | 2025-05-13 19:39:08.645533 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-13 19:39:08.646331 | orchestrator | Tuesday 13 May 2025 19:39:08 +0000 (0:00:00.228) 0:00:19.911 *********** 2025-05-13 19:39:08.714292 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:08.740678 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:08.764492 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:08.792404 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:08.862736 | orchestrator | ok: [testbed-node-3] 2025-05-13 
19:39:08.863625 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:08.865875 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:08.866840 | orchestrator | 2025-05-13 19:39:08.870128 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-13 19:39:08.870303 | orchestrator | Tuesday 13 May 2025 19:39:08 +0000 (0:00:00.225) 0:00:20.137 *********** 2025-05-13 19:39:08.973771 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:08.998166 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:09.027665 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:09.092107 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:09.093167 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:09.093425 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:09.093773 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:09.094669 | orchestrator | 2025-05-13 19:39:09.095454 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-13 19:39:09.095897 | orchestrator | Tuesday 13 May 2025 19:39:09 +0000 (0:00:00.229) 0:00:20.366 *********** 2025-05-13 19:39:09.392130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:09.392292 | orchestrator | 2025-05-13 19:39:09.393346 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-13 19:39:09.394578 | orchestrator | Tuesday 13 May 2025 19:39:09 +0000 (0:00:00.297) 0:00:20.664 *********** 2025-05-13 19:39:09.913641 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:09.914280 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:09.915848 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:09.915922 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:09.917009 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:09.918255 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:09.918642 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:09.919697 | orchestrator | 2025-05-13 19:39:09.919963 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-13 19:39:09.920560 | orchestrator | Tuesday 13 May 2025 19:39:09 +0000 (0:00:00.521) 0:00:21.186 *********** 2025-05-13 19:39:10.002129 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:10.034255 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:10.063355 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:10.087033 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:10.155742 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:10.158382 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:10.158926 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:10.160308 | orchestrator | 2025-05-13 19:39:10.161005 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-13 19:39:10.161994 | orchestrator | Tuesday 13 May 2025 19:39:10 +0000 (0:00:00.243) 0:00:21.429 *********** 2025-05-13 19:39:11.234493 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:11.234724 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:11.234825 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:11.236067 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:11.237000 | orchestrator | 
changed: [testbed-node-1] 2025-05-13 19:39:11.237902 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:11.238550 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:11.239509 | orchestrator | 2025-05-13 19:39:11.239974 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-13 19:39:11.240938 | orchestrator | Tuesday 13 May 2025 19:39:11 +0000 (0:00:01.078) 0:00:22.508 *********** 2025-05-13 19:39:11.812098 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:11.812948 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:11.815457 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:11.816067 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:11.816682 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:11.817414 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:11.818177 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:11.818939 | orchestrator | 2025-05-13 19:39:11.819831 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-13 19:39:11.820170 | orchestrator | Tuesday 13 May 2025 19:39:11 +0000 (0:00:00.576) 0:00:23.085 *********** 2025-05-13 19:39:12.867702 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:12.872600 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:12.872649 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:12.872662 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:12.872674 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:12.872732 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:12.873093 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:12.873637 | orchestrator | 2025-05-13 19:39:12.874321 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-13 19:39:12.875242 | orchestrator | Tuesday 13 May 2025 19:39:12 +0000 (0:00:01.053) 0:00:24.138 *********** 2025-05-13 19:39:27.138519 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:27.138679 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:27.138714 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:27.138741 | orchestrator | changed: [testbed-manager] 2025-05-13 19:39:27.140285 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:27.141380 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:27.142313 | orchestrator | changed: [testbed-node-2]
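The osism.commons.repository role swaps the stock apt configuration for a managed one: a 99osism apt configuration drop-in, removal of the legacy /etc/apt/sources.list, and a deb822-style ubuntu.sources file, followed by a cache update (at roughly 14 seconds the most expensive step of this play so far). A sketch of the two central tasks; the template name is an assumption, and the mirror the template configures is not visible in this log:

    - name: Copy ubuntu.sources file
      ansible.builtin.template:
        src: ubuntu.sources.j2
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true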
2025-05-13 19:39:27.142780 | orchestrator | 2025-05-13 19:39:27.143605 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-13 19:39:27.144086 | orchestrator | Tuesday 13 May 2025 19:39:27 +0000 (0:00:14.266) 0:00:38.405 *********** 2025-05-13 19:39:27.211074 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:27.240103 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:27.266382 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:27.293967 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:27.349947 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:27.350101 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:27.350496 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:27.354176 | orchestrator | 2025-05-13 19:39:27.354306 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-13 19:39:27.354321 | orchestrator | Tuesday 13 May 2025 19:39:27 +0000 (0:00:00.217) 0:00:38.623 *********** 2025-05-13 19:39:27.427748 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:27.458985 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:27.486698 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:27.514690 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:27.572281 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:27.573412 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:27.573668 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:27.574079 | orchestrator | 2025-05-13 19:39:27.574378 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-13 19:39:27.574899 | orchestrator | Tuesday 13 May 2025 19:39:27 +0000 (0:00:00.223) 0:00:38.846 *********** 2025-05-13 19:39:27.658378 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:27.695578 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:27.728280 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:27.753555 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:27.822437 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:27.822686 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:27.823416 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:27.825338 | orchestrator | 2025-05-13 19:39:27.826845 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-13 19:39:27.827143 | orchestrator | Tuesday 13 May 2025 19:39:27 +0000 (0:00:00.250) 0:00:39.096 *********** 2025-05-13 19:39:28.139864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:28.141984 | orchestrator | 2025-05-13 19:39:28.144955 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-13 19:39:28.146528 | orchestrator | Tuesday 13 May 2025 19:39:28 +0000 (0:00:00.316) 0:00:39.413 *********** 2025-05-13 19:39:29.845793 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:29.846810 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:29.847582 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:29.849568 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:29.850181 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:29.851233 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:29.851790 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:29.852326 | orchestrator | 2025-05-13 19:39:29.853152 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-13 19:39:29.853721 | orchestrator | Tuesday 13 May 2025 19:39:29 +0000 (0:00:01.704) 0:00:41.117 *********** 2025-05-13 19:39:30.890889 | orchestrator | changed: [testbed-manager] 2025-05-13 19:39:30.892206 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:30.893355 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:30.894552 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:30.896461 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:30.898987 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:30.899091 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:30.899108 | orchestrator | 2025-05-13 19:39:30.899171 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-13 19:39:30.899900 | orchestrator | Tuesday 13 May 2025 19:39:30 +0000 (0:00:00.836) 0:00:42.164 *********** 2025-05-13 19:39:31.727740 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:31.729040
| orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:31.729086 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:31.729145 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:31.729610 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:31.730757 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:31.731564 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:31.732135 | orchestrator | 2025-05-13 19:39:31.733125 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-13 19:39:31.733551 | orchestrator | Tuesday 13 May 2025 19:39:31 +0000 (0:00:00.836) 0:00:43.000 *********** 2025-05-13 19:39:32.038901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:39:32.040590 | orchestrator | 2025-05-13 19:39:32.042501 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-13 19:39:32.042738 | orchestrator | Tuesday 13 May 2025 19:39:32 +0000 (0:00:00.312) 0:00:43.312 *********** 2025-05-13 19:39:33.075490 | orchestrator | changed: [testbed-manager] 2025-05-13 19:39:33.077006 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:33.079702 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:33.082259 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:33.088327 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:33.090765 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:33.091250 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:33.095559 | orchestrator | 2025-05-13 19:39:33.096227 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-13 19:39:33.096698 | orchestrator | Tuesday 13 May 2025 19:39:33 +0000 (0:00:01.034) 0:00:44.347 *********** 2025-05-13 19:39:33.171204 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:39:33.199562 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:39:33.225151 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:39:33.254141 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:39:33.386080 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:39:33.392247 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:39:33.393156 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:39:33.393778 | orchestrator | 2025-05-13 19:39:33.394564 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-13 19:39:33.395245 | orchestrator | Tuesday 13 May 2025 19:39:33 +0000 (0:00:00.310) 0:00:44.658 *********** 2025-05-13 19:39:45.140945 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:45.141078 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:45.141091 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:45.141099 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:45.141107 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:45.141162 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:45.142969 | orchestrator | changed: [testbed-manager]
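The rsyslog role installs and configures rsyslog and then adds a rule that forwards all syslog messages to the local fluentd daemon, which is what the changed results for "Forward syslog message to local fluentd daemon" above reflect. One plausible way to express that rule as an Ansible task (the drop-in file name is an assumption; 24224 is fluentd's default forward port):

    - name: Forward syslog message to local fluentd daemon
      ansible.builtin.copy:
        content: |
          *.* action(type="omfwd" target="127.0.0.1" port="24224" protocol="tcp")
        dest: /etc/rsyslog.d/10-fluentd.conf
        mode: "0644"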
2025-05-13 19:39:45.144124 | orchestrator | 2025-05-13 19:39:45.145209 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-13 19:39:45.146350 | orchestrator | Tuesday 13 May 2025 19:39:45 +0000 (0:00:11.747) 0:00:56.405 *********** 2025-05-13 19:39:46.664460 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:46.665170 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:46.668931 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:46.668991 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:46.669013 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:46.669033 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:46.669804 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:46.670519 | orchestrator | 2025-05-13 19:39:46.671296 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-13 19:39:46.672163 | orchestrator | Tuesday 13 May 2025 19:39:46 +0000 (0:00:01.532) 0:00:57.938 *********** 2025-05-13 19:39:47.536531 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:47.537978 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:47.539802 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:47.540615 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:47.541824 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:47.543747 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:47.544287 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:47.544664 | orchestrator | 2025-05-13 19:39:47.545994 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-13 19:39:47.546859 | orchestrator | Tuesday 13 May 2025 19:39:47 +0000 (0:00:00.870) 0:00:58.808 *********** 2025-05-13 19:39:47.615699 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:47.653416 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:47.676848 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:47.714142 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:47.787560 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:47.788674 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:47.788983 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:47.790193 | orchestrator | 2025-05-13 19:39:47.790419 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-13 19:39:47.790922 | orchestrator | Tuesday 13 May 2025 19:39:47 +0000 (0:00:00.215) 0:00:59.061 *********** 2025-05-13 19:39:47.864140 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:47.893162 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:47.916755 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:47.945904 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:48.003050 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:48.003127 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:48.003403 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:48.003756 | orchestrator | 2025-05-13 19:39:48.004513 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-13 19:39:48.004798 | orchestrator | Tuesday 13 May 2025 19:39:47 +0000 (0:00:00.269) 0:00:59.277 *********** 2025-05-13 19:39:48.274755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
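The Debian-family package tasks that follow begin by installing needrestart and switching it from its interactive default to automatic restarts, so apt operations in this non-interactive session cannot block on the restart dialog. A sketch of one common way to set the mode (the role may template the whole file instead; the path follows needrestart's packaging conventions):

    - name: Set needrestart mode
      ansible.builtin.lineinfile:
        path: /etc/needrestart/needrestart.conf
        regexp: '^#?\$nrconf\{restart\}'
        # 'a' = restart services automatically instead of prompting
        line: "$nrconf{restart} = 'a';"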
2025-05-13 19:39:48.277772 | orchestrator | 2025-05-13 19:39:48.277815 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-13 19:39:48.277836 | orchestrator | Tuesday 13 May 2025 19:39:48 +0000 (0:00:00.269) 0:00:59.546 *********** 2025-05-13 19:39:49.709333 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:49.712607 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:49.712628 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:49.712633 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:49.713207 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:49.713855 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:49.714791 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:49.715788 | orchestrator | 2025-05-13 19:39:49.716331 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-13 19:39:49.716847 | orchestrator | Tuesday 13 May 2025 19:39:49 +0000 (0:00:01.435) 0:01:00.982 *********** 2025-05-13 19:39:50.363823 | orchestrator | changed: [testbed-manager] 2025-05-13 19:39:50.363900 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:50.366802 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:50.369967 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:50.370802 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:50.372130 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:50.373674 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:50.374486 | orchestrator | 2025-05-13 19:39:50.375043 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-13 19:39:50.376458 | orchestrator | Tuesday 13 May 2025 19:39:50 +0000 (0:00:00.653) 0:01:01.635 *********** 2025-05-13 19:39:50.448429 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:50.477021 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:50.510999 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:50.534364 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:50.603966 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:50.604101 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:50.605592 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:50.606176 | orchestrator | 2025-05-13 19:39:50.606575 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-13 19:39:50.606946 | orchestrator | Tuesday 13 May 2025 19:39:50 +0000 (0:00:00.243) 0:01:01.879 *********** 2025-05-13 19:39:51.706605 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:51.707500 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:51.708317 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:51.709635 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:51.710623 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:51.711063 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:51.711991 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:51.712632 | orchestrator | 2025-05-13 19:39:51.713500 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-13 19:39:51.714155 | orchestrator | Tuesday 13 May 2025 19:39:51 +0000 (0:00:01.100) 0:01:02.979 *********** 2025-05-13 19:39:53.191929 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:39:53.192867 | orchestrator | changed: [testbed-manager] 2025-05-13 19:39:53.193340 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:39:53.194401 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:39:53.196527 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:39:53.197505 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:39:53.197891 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:39:53.199290 | orchestrator | 2025-05-13
19:39:53.199639 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-13 19:39:53.200499 | orchestrator | Tuesday 13 May 2025 19:39:53 +0000 (0:00:01.485) 0:01:04.464 *********** 2025-05-13 19:39:55.504773 | orchestrator | ok: [testbed-manager] 2025-05-13 19:39:55.505899 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:39:55.507586 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:39:55.509377 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:39:55.510247 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:39:55.510647 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:39:55.511655 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:39:55.512472 | orchestrator | 2025-05-13 19:39:55.513115 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-13 19:39:55.513835 | orchestrator | Tuesday 13 May 2025 19:39:55 +0000 (0:00:02.312) 0:01:06.777 *********** 2025-05-13 19:40:32.478188 | orchestrator | ok: [testbed-manager] 2025-05-13 19:40:32.478356 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:40:32.480725 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:40:32.480773 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:40:32.483545 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:40:32.483801 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:40:32.484586 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:40:32.487715 | orchestrator | 2025-05-13 19:40:32.487748 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-13 19:40:32.487761 | orchestrator | Tuesday 13 May 2025 19:40:32 +0000 (0:00:36.968) 0:01:43.746 *********** 2025-05-13 19:41:48.912562 | orchestrator | changed: [testbed-manager] 2025-05-13 19:41:48.912678 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:41:48.912693 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:41:48.912705 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:41:48.913311 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:41:48.913349 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:41:48.913656 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:41:48.914310 | orchestrator | 2025-05-13 19:41:48.915077 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-13 19:41:48.915643 | orchestrator | Tuesday 13 May 2025 19:41:48 +0000 (0:01:16.434) 0:03:00.180 *********** 2025-05-13 19:41:50.494431 | orchestrator | ok: [testbed-manager] 2025-05-13 19:41:50.494588 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:41:50.497372 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:41:50.497797 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:41:50.498481 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:41:50.499052 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:41:50.499772 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:41:50.500370 | orchestrator | 2025-05-13 19:41:50.501018 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-13 19:41:50.501630 | orchestrator | Tuesday 13 May 2025 19:41:50 +0000 (0:00:01.586) 0:03:01.766 *********** 2025-05-13 19:42:02.084546 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:02.086980 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:02.087531 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:02.088292 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:02.088881 | orchestrator | ok: 
[testbed-node-5] 2025-05-13 19:42:02.089406 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:02.090420 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:02.090669 | orchestrator | 2025-05-13 19:42:02.091352 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-13 19:42:02.091990 | orchestrator | Tuesday 13 May 2025 19:42:02 +0000 (0:00:11.586) 0:03:13.352 *********** 2025-05-13 19:42:02.527370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-13 19:42:02.528246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-13 19:42:02.529398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-13 19:42:02.530398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-13 19:42:02.531577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
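osism.commons.sysctl carries one parameter set per service group (elasticsearch, rabbitmq, generic, compute, k3s_node) and includes the same task file once per set, which is exactly what the five included: lines above echo, item by item. The driving loop looks roughly like this (the dict variable name is an assumption; the item shape matches the log output):

    - name: Include sysctl tasks
      ansible.builtin.include_tasks: sysctl.yml
      loop: "{{ sysctl_parameters | dict2items }}"

    # inside sysctl.yml, each entry of item.value would then be applied:
    - name: "Set sysctl parameters on {{ item.key }}"
      ansible.posix.sysctl:
        name: "{{ parameter.name }}"
        value: "{{ parameter.value }}"
      loop: "{{ item.value }}"
      loop_control:
        loop_var: parameter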
2025-05-13 19:42:02.532299 | orchestrator | 2025-05-13 19:42:02.534000 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-13 19:42:02.534290 | orchestrator | Tuesday 13 May 2025 19:42:02 +0000 (0:00:00.443) 0:03:13.795 *********** 2025-05-13 19:42:02.583788 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 19:42:02.614184 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:02.694345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 19:42:02.694433 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 19:42:03.107682 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:03.108584 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:03.110586 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-13 19:42:03.111701 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:03.112507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 19:42:03.113557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 19:42:03.114110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 19:42:03.115167 | orchestrator | 2025-05-13 19:42:03.115680 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-13 19:42:03.116520 | orchestrator | Tuesday 13 May 2025 19:42:03 +0000 (0:00:00.583) 0:03:14.379 *********** 2025-05-13 19:42:03.168339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 19:42:03.168402 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 19:42:03.168414 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 19:42:03.205885 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 19:42:03.208218 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 19:42:03.209032 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 19:42:03.209901 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 19:42:03.212240 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 19:42:03.254105 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 19:42:03.255315 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 19:42:03.288772 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:03.337790 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 19:42:03.338523 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 19:42:03.338812 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 19:42:03.339234 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 19:42:03.339715 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 19:42:03.340167 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 19:42:03.342531 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 19:42:03.351417 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 19:42:03.351690 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 19:42:06.719310 | orchestrator | skipping:
[testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 19:42:06.720731 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 19:42:06.721120 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 19:42:06.724279 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 19:42:06.725604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 19:42:06.726194 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 19:42:06.727221 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:06.727802 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 19:42:06.728946 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 19:42:06.729476 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 19:42:06.730666 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 19:42:06.731212 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 19:42:06.731989 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:06.732509 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-13 19:42:06.733081 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-13 19:42:06.733892 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-13 19:42:06.734802 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-13 19:42:06.735171 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-13 19:42:06.736033 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-13 19:42:06.736448 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-13 19:42:06.736953 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-13 19:42:06.738221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-13 19:42:06.738975 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-13 19:42:06.739732 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:06.740626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 19:42:06.741178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 19:42:06.741550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-13 19:42:06.742056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-13 19:42:06.742564 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-13 19:42:06.742889 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-13 19:42:06.743469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-13 19:42:06.744197 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-13 19:42:06.744828 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-13 19:42:06.745196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748804 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-13 19:42:06.748815 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-13 19:42:06.748827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-13 19:42:06.748854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-13 19:42:06.748865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 19:42:06.748877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 19:42:06.748888 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-13 19:42:06.749063 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 19:42:06.749523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 19:42:06.749891 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-13 19:42:06.750132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 19:42:06.750652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 19:42:06.751043 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-13 19:42:06.751134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 19:42:06.751446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 19:42:06.751823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-13 19:42:06.752300 | orchestrator | 2025-05-13 19:42:06.752392 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-13 19:42:06.752848 | orchestrator | Tuesday 13 May 2025 19:42:06 +0000 (0:00:03.611) 
0:03:17.990 *********** 2025-05-13 19:42:07.281315 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.282063 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.282614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.283895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.285023 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.285217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.285796 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-13 19:42:07.286378 | orchestrator | 2025-05-13 19:42:07.287380 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-13 19:42:07.287621 | orchestrator | Tuesday 13 May 2025 19:42:07 +0000 (0:00:00.565) 0:03:18.556 *********** 2025-05-13 19:42:07.339322 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 19:42:07.370149 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:07.370534 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 19:42:07.371151 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 19:42:07.398447 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:07.427000 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-13 19:42:07.427369 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:07.455626 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:07.956566 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 19:42:07.957298 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 19:42:07.958491 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-13 19:42:07.959200 | orchestrator | 2025-05-13 19:42:07.960199 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-13 19:42:07.960839 | orchestrator | Tuesday 13 May 2025 19:42:07 +0000 (0:00:00.672) 0:03:19.228 *********** 2025-05-13 19:42:08.010507 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 19:42:08.045058 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 19:42:08.046665 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:08.078483 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:08.078567 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 19:42:08.109875 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:08.110752 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-13 19:42:08.136169 | orchestrator | skipping: [testbed-node-2] 
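
Note: the osism.commons.sysctl tasks above apply per-group kernel tuning: shorter TCP keepalive intervals and probe counts for faster dead-peer detection, 16 MiB socket buffer ceilings (net.core.wmem_max/rmem_max), a reduced FIN timeout with TIME_WAIT reuse, larger accept and SYN backlogs (somaxconn 4096, tcp_max_syn_backlog 8192) with syncookies disabled, vm.swappiness=1 on every host, nf_conntrack_max=1048576 only on the compute group, and a raised fs.inotify.max_user_instances on the k3s_node group (the changed results for testbed-node-3..5 follow just below). A minimal sketch of an equivalent playbook, assuming the stock ansible.posix.sysctl module rather than the actual role internals:

    # Sketch only: approximates what the osism.commons.sysctl role appears to
    # do per this log; the real role code may differ.
    - hosts: all
      become: true
      tasks:
        - name: Set sysctl parameters
          ansible.posix.sysctl:
            name: "{{ item.name }}"
            value: "{{ item.value }}"
            state: present
            sysctl_set: true     # also apply immediately via sysctl -w
            reload: true
          loop:
            - {name: net.ipv4.tcp_keepalive_intvl, value: 3}
            - {name: net.core.somaxconn, value: 4096}
            - {name: net.ipv4.tcp_max_syn_backlog, value: 8192}
            - {name: vm.swappiness, value: 1}
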
2025-05-13 19:42:08.608729 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 19:42:08.609080 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 19:42:08.609799 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-13 19:42:08.611323 | orchestrator | 2025-05-13 19:42:08.611978 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-13 19:42:08.612871 | orchestrator | Tuesday 13 May 2025 19:42:08 +0000 (0:00:00.653) 0:03:19.882 *********** 2025-05-13 19:42:08.694384 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:08.720411 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:08.759931 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:08.790363 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:08.915047 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:08.915707 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:08.917717 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:08.919027 | orchestrator | 2025-05-13 19:42:08.920933 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-13 19:42:08.921423 | orchestrator | Tuesday 13 May 2025 19:42:08 +0000 (0:00:00.304) 0:03:20.187 *********** 2025-05-13 19:42:14.496642 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:14.496825 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:14.497506 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:14.499178 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:14.500586 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:14.501706 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:14.502692 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:14.503154 | orchestrator | 2025-05-13 19:42:14.504371 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-13 19:42:14.504819 | orchestrator | Tuesday 13 May 2025 19:42:14 +0000 (0:00:05.582) 0:03:25.770 *********** 2025-05-13 19:42:14.568343 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-13 19:42:14.604646 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-13 19:42:14.605510 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:14.641674 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-13 19:42:14.641754 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:14.641767 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-13 19:42:14.677199 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:14.713061 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:14.713158 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-13 19:42:14.778248 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:14.778873 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-13 19:42:14.779393 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:14.779563 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-13 19:42:14.779966 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:14.780688 | orchestrator | 2025-05-13 19:42:14.781854 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-13 19:42:14.782389 | orchestrator | Tuesday 13 
May 2025 19:42:14 +0000 (0:00:00.282) 0:03:26.053 *********** 2025-05-13 19:42:15.806726 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-13 19:42:15.809087 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-13 19:42:15.809173 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-13 19:42:15.809244 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-13 19:42:15.809949 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-13 19:42:15.810865 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-13 19:42:15.811679 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-13 19:42:15.812242 | orchestrator | 2025-05-13 19:42:15.813214 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-13 19:42:15.813676 | orchestrator | Tuesday 13 May 2025 19:42:15 +0000 (0:00:01.026) 0:03:27.079 *********** 2025-05-13 19:42:16.295548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:16.299014 | orchestrator | 2025-05-13 19:42:16.299057 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-13 19:42:16.299071 | orchestrator | Tuesday 13 May 2025 19:42:16 +0000 (0:00:00.488) 0:03:27.567 *********** 2025-05-13 19:42:17.414169 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:17.414919 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:17.415600 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:17.416474 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:17.417812 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:17.418830 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:17.419130 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:17.420098 | orchestrator | 2025-05-13 19:42:17.420545 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-13 19:42:17.421221 | orchestrator | Tuesday 13 May 2025 19:42:17 +0000 (0:00:01.120) 0:03:28.688 *********** 2025-05-13 19:42:18.011563 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:18.011944 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:18.013142 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:18.014298 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:18.015193 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:18.016046 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:18.016722 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:18.017413 | orchestrator | 2025-05-13 19:42:18.018132 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-13 19:42:18.018731 | orchestrator | Tuesday 13 May 2025 19:42:18 +0000 (0:00:00.597) 0:03:29.285 *********** 2025-05-13 19:42:18.620834 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:18.622200 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:18.623368 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:18.624333 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:18.625455 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:18.626105 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:18.627200 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:18.627731 | orchestrator | 2025-05-13 19:42:18.628832 | orchestrator | TASK 
[osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-13 19:42:18.629347 | orchestrator | Tuesday 13 May 2025 19:42:18 +0000 (0:00:00.608) 0:03:29.894 *********** 2025-05-13 19:42:19.259099 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:19.259610 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:19.260754 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:19.262141 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:19.262397 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:19.263291 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:19.263675 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:19.264240 | orchestrator | 2025-05-13 19:42:19.264732 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-13 19:42:19.265361 | orchestrator | Tuesday 13 May 2025 19:42:19 +0000 (0:00:00.636) 0:03:30.530 *********** 2025-05-13 19:42:20.237897 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163292.5417588, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.243912 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163332.9281561, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.244769 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163328.3627555, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.246196 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163337.0871744, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.246856 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163334.4234006, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.248919 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163323.5527647, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.249006 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747163331.0917985, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.249512 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163321.7027588, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.250190 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163247.3354475, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.250579 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163254.208059, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.251216 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163254.784313, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.253141 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163244.168161, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.253880 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163261.1088371, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.254286 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747163244.4689767, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 19:42:20.255721 | orchestrator | 2025-05-13 19:42:20.256678 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-13 19:42:20.258373 | orchestrator | Tuesday 13 May 2025 19:42:20 +0000 (0:00:00.978) 0:03:31.509 *********** 2025-05-13 19:42:21.322171 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:21.322413 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:21.323869 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:21.323896 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:21.323908 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:21.323919 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:21.325469 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:21.325910 | orchestrator | 2025-05-13 19:42:21.326375 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-13 19:42:21.326713 | orchestrator | Tuesday 13 May 2025 19:42:21 +0000 (0:00:01.086) 0:03:32.596 *********** 2025-05-13 19:42:22.632307 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:22.633978 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:22.634010 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:22.634077 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:22.634488 | orchestrator | changed: [testbed-node-2] 2025-05-13 
19:42:22.635124 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:22.635853 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:22.638483 | orchestrator | 2025-05-13 19:42:22.638634 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-13 19:42:22.639481 | orchestrator | Tuesday 13 May 2025 19:42:22 +0000 (0:00:01.309) 0:03:33.905 *********** 2025-05-13 19:42:23.820517 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:23.820634 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:23.821566 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:23.826662 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:23.826787 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:23.827431 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:23.830733 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:23.831094 | orchestrator | 2025-05-13 19:42:23.831716 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-13 19:42:23.835694 | orchestrator | Tuesday 13 May 2025 19:42:23 +0000 (0:00:01.188) 0:03:35.093 *********** 2025-05-13 19:42:23.940750 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:23.973614 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:24.014157 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:24.047245 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:24.114403 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:24.115627 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:24.117012 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:24.117898 | orchestrator | 2025-05-13 19:42:24.119540 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-13 19:42:24.121162 | orchestrator | Tuesday 13 May 2025 19:42:24 +0000 (0:00:00.294) 0:03:35.388 *********** 2025-05-13 19:42:24.830828 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:24.834327 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:24.834407 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:24.834432 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:24.835359 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:24.835872 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:24.836642 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:24.837780 | orchestrator | 2025-05-13 19:42:24.838304 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-13 19:42:24.839036 | orchestrator | Tuesday 13 May 2025 19:42:24 +0000 (0:00:00.714) 0:03:36.103 *********** 2025-05-13 19:42:25.234650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:25.238079 | orchestrator | 2025-05-13 19:42:25.238124 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-13 19:42:25.238138 | orchestrator | Tuesday 13 May 2025 19:42:25 +0000 (0:00:00.403) 0:03:36.506 *********** 2025-05-13 19:42:32.475417 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:32.476063 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:32.477561 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:32.478632 | orchestrator | changed: 
[testbed-node-1] 2025-05-13 19:42:32.480075 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:32.480810 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:32.481555 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:32.482641 | orchestrator | 2025-05-13 19:42:32.483104 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-13 19:42:32.483630 | orchestrator | Tuesday 13 May 2025 19:42:32 +0000 (0:00:07.241) 0:03:43.748 *********** 2025-05-13 19:42:33.812847 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:33.814520 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:33.815824 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:33.816381 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:33.817250 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:33.818075 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:33.818686 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:33.819805 | orchestrator | 2025-05-13 19:42:33.821018 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-13 19:42:33.821838 | orchestrator | Tuesday 13 May 2025 19:42:33 +0000 (0:00:01.337) 0:03:45.085 *********** 2025-05-13 19:42:34.909002 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:34.909214 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:34.910995 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:34.911831 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:34.912822 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:34.913749 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:34.914633 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:34.915462 | orchestrator | 2025-05-13 19:42:34.917230 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-13 19:42:34.918477 | orchestrator | Tuesday 13 May 2025 19:42:34 +0000 (0:00:01.095) 0:03:46.181 *********** 2025-05-13 19:42:35.286378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:35.291622 | orchestrator | 2025-05-13 19:42:35.291676 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-13 19:42:35.291690 | orchestrator | Tuesday 13 May 2025 19:42:35 +0000 (0:00:00.377) 0:03:46.558 *********** 2025-05-13 19:42:43.381915 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:43.382924 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:43.383362 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:43.384976 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:43.387263 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:43.388131 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:43.388703 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:43.389303 | orchestrator | 2025-05-13 19:42:43.390104 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-13 19:42:43.390613 | orchestrator | Tuesday 13 May 2025 19:42:43 +0000 (0:00:08.095) 0:03:54.653 *********** 2025-05-13 19:42:44.002203 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:44.002433 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:44.003085 | orchestrator | changed: [testbed-node-1] 
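
Note: the rng tasks above keep the kernel entropy pool fed (the role installs the distribution's rng package, removes the competing haveged package, and enables the service), and the smartd tasks set up SMART disk monitoring: install smartmontools, create /var/log/smartd (the remaining changed results continue below), deploy a configuration file, and enable the daemon. A rough sketch of the smartd portion, assuming plain ansible.builtin modules rather than the actual osism.services.smartd role code:

    # Sketch of the smartd steps visible in this log, not the actual role.
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present
    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"   # assumption; the mode used by the role is not shown
    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: started
        enabled: true
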
2025-05-13 19:42:44.005333 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:44.005933 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:44.006906 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:44.007486 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:44.008432 | orchestrator | 2025-05-13 19:42:44.008907 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-13 19:42:44.009806 | orchestrator | Tuesday 13 May 2025 19:42:43 +0000 (0:00:00.621) 0:03:55.275 *********** 2025-05-13 19:42:45.123805 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:45.123890 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:45.123945 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:45.124717 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:45.125562 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:45.126972 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:45.128387 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:45.129633 | orchestrator | 2025-05-13 19:42:45.130712 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-13 19:42:45.132043 | orchestrator | Tuesday 13 May 2025 19:42:45 +0000 (0:00:01.120) 0:03:56.396 *********** 2025-05-13 19:42:46.124492 | orchestrator | changed: [testbed-manager] 2025-05-13 19:42:46.125393 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:42:46.125973 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:42:46.127193 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:42:46.127748 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:42:46.128294 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:42:46.129111 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:42:46.129921 | orchestrator | 2025-05-13 19:42:46.130823 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-13 19:42:46.131533 | orchestrator | Tuesday 13 May 2025 19:42:46 +0000 (0:00:00.999) 0:03:57.396 *********** 2025-05-13 19:42:46.206968 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:46.239860 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:46.290920 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:46.327442 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:46.358409 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:46.445808 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:46.446536 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:46.447901 | orchestrator | 2025-05-13 19:42:46.448770 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-13 19:42:46.449485 | orchestrator | Tuesday 13 May 2025 19:42:46 +0000 (0:00:00.323) 0:03:57.720 *********** 2025-05-13 19:42:46.534680 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:46.599931 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:46.634627 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:46.667763 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:46.736320 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:46.737453 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:46.738453 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:46.739265 | orchestrator | 2025-05-13 19:42:46.740059 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-13 19:42:46.741679 | 
orchestrator | Tuesday 13 May 2025 19:42:46 +0000 (0:00:00.289) 0:03:58.009 *********** 2025-05-13 19:42:46.840631 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:46.874192 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:46.915023 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:46.965344 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:47.053598 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:47.054850 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:47.056558 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:47.057309 | orchestrator | 2025-05-13 19:42:47.057968 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-13 19:42:47.058799 | orchestrator | Tuesday 13 May 2025 19:42:47 +0000 (0:00:00.316) 0:03:58.326 *********** 2025-05-13 19:42:53.617213 | orchestrator | ok: [testbed-manager] 2025-05-13 19:42:53.617393 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:42:53.618142 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:42:53.618892 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:42:53.621445 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:42:53.622496 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:42:53.623127 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:42:53.624216 | orchestrator | 2025-05-13 19:42:53.624547 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-13 19:42:53.625293 | orchestrator | Tuesday 13 May 2025 19:42:53 +0000 (0:00:06.556) 0:04:04.883 *********** 2025-05-13 19:42:54.039626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:54.039832 | orchestrator | 2025-05-13 19:42:54.042135 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-13 19:42:54.044400 | orchestrator | Tuesday 13 May 2025 19:42:54 +0000 (0:00:00.428) 0:04:05.311 *********** 2025-05-13 19:42:54.120044 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.122942 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-13 19:42:54.122991 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.170721 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:54.171446 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-13 19:42:54.172358 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.173006 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-13 19:42:54.205046 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:54.246581 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:54.246791 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.247443 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-13 19:42:54.248545 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.251257 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-13 19:42:54.278536 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:54.279469 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.352951 | orchestrator | skipping: 
[testbed-node-3] 2025-05-13 19:42:54.356243 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-13 19:42:54.357527 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:54.359099 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-13 19:42:54.360424 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-13 19:42:54.362973 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:54.363005 | orchestrator | 2025-05-13 19:42:54.363373 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-13 19:42:54.364431 | orchestrator | Tuesday 13 May 2025 19:42:54 +0000 (0:00:00.315) 0:04:05.626 *********** 2025-05-13 19:42:54.757099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:54.757367 | orchestrator | 2025-05-13 19:42:54.758321 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-13 19:42:54.758936 | orchestrator | Tuesday 13 May 2025 19:42:54 +0000 (0:00:00.404) 0:04:06.030 *********** 2025-05-13 19:42:54.835430 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-13 19:42:54.835533 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-13 19:42:54.873810 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:42:54.873908 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-13 19:42:54.912335 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:42:54.912578 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-13 19:42:54.968514 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:42:54.968775 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-13 19:42:55.136703 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:42:55.136933 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-13 19:42:55.233666 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:42:55.233770 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:42:55.234423 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-13 19:42:55.235259 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:42:55.236174 | orchestrator | 2025-05-13 19:42:55.237034 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-13 19:42:55.237800 | orchestrator | Tuesday 13 May 2025 19:42:55 +0000 (0:00:00.475) 0:04:06.506 *********** 2025-05-13 19:42:55.652045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:42:55.652924 | orchestrator | 2025-05-13 19:42:55.653554 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-13 19:42:55.654312 | orchestrator | Tuesday 13 May 2025 19:42:55 +0000 (0:00:00.420) 0:04:06.926 *********** 2025-05-13 19:43:29.421622 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:29.421735 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:29.421751 | orchestrator 
| changed: [testbed-node-3] 2025-05-13 19:43:29.421763 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:29.421775 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:29.421912 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:29.423244 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:29.424401 | orchestrator | 2025-05-13 19:43:29.425218 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-13 19:43:29.426090 | orchestrator | Tuesday 13 May 2025 19:43:29 +0000 (0:00:33.762) 0:04:40.688 *********** 2025-05-13 19:43:37.048546 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:37.049260 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:37.050455 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:37.051616 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:43:37.052248 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:37.053133 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:37.053274 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:37.053670 | orchestrator | 2025-05-13 19:43:37.054132 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-13 19:43:37.056946 | orchestrator | Tuesday 13 May 2025 19:43:37 +0000 (0:00:07.633) 0:04:48.321 *********** 2025-05-13 19:43:44.313461 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:44.314084 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:44.315596 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:44.316339 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:43:44.317199 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:44.319060 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:44.319515 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:44.320563 | orchestrator | 2025-05-13 19:43:44.320755 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-13 19:43:44.321281 | orchestrator | Tuesday 13 May 2025 19:43:44 +0000 (0:00:07.262) 0:04:55.584 *********** 2025-05-13 19:43:45.971688 | orchestrator | ok: [testbed-manager] 2025-05-13 19:43:45.974314 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:43:45.974738 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:43:45.976403 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:43:45.977224 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:43:45.977983 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:43:45.978940 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:43:45.979865 | orchestrator | 2025-05-13 19:43:45.980427 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-13 19:43:45.981582 | orchestrator | Tuesday 13 May 2025 19:43:45 +0000 (0:00:01.656) 0:04:57.241 *********** 2025-05-13 19:43:51.350958 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:51.351151 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:51.351168 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:43:51.351252 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:51.351603 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:51.352910 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:51.352933 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:51.352946 | orchestrator | 2025-05-13 19:43:51.353839 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 
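
Note: the cleanup tasks above strip the image's provisioning baggage so the deployment framework, not the distribution, controls upgrades: unwanted packages are purged, cloud-init and unattended-upgrades are removed, the apt cache is autocleaned, and orphaned dependencies are autoremoved; the cloudinit tasks included next also remove the cloud-init configuration directory. Sketched with plain apt modules (an assumption; the actual role tasks are not shown in the log):

    # Sketch of the cleanup steps logged above, using stock modules.
    - name: Remove cloud-init and unattended-upgrades
      ansible.builtin.apt:
        name:
          - cloud-init
          - unattended-upgrades
        state: absent
        purge: true
    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true
    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
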
2025-05-13 19:43:51.353868 | orchestrator | Tuesday 13 May 2025 19:43:51 +0000 (0:00:05.381) 0:05:02.622 *********** 2025-05-13 19:43:51.781749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:43:51.782877 | orchestrator | 2025-05-13 19:43:51.783878 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-13 19:43:51.784866 | orchestrator | Tuesday 13 May 2025 19:43:51 +0000 (0:00:00.431) 0:05:03.053 *********** 2025-05-13 19:43:52.523938 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:52.528081 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:52.528132 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:52.528773 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:52.530721 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:43:52.531745 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:52.533465 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:52.534432 | orchestrator | 2025-05-13 19:43:52.534910 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-13 19:43:52.535778 | orchestrator | Tuesday 13 May 2025 19:43:52 +0000 (0:00:00.742) 0:05:03.796 *********** 2025-05-13 19:43:54.061520 | orchestrator | ok: [testbed-manager] 2025-05-13 19:43:54.061733 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:43:54.061808 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:43:54.065843 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:43:54.067070 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:43:54.067708 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:43:54.068670 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:43:54.069235 | orchestrator | 2025-05-13 19:43:54.070438 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-13 19:43:54.070565 | orchestrator | Tuesday 13 May 2025 19:43:54 +0000 (0:00:01.536) 0:05:05.333 *********** 2025-05-13 19:43:54.829156 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:43:54.829350 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:43:54.829875 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:43:54.832070 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:43:54.832778 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:43:54.833967 | orchestrator | changed: [testbed-manager] 2025-05-13 19:43:54.835008 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:43:54.836059 | orchestrator | 2025-05-13 19:43:54.836933 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-13 19:43:54.837733 | orchestrator | Tuesday 13 May 2025 19:43:54 +0000 (0:00:00.769) 0:05:06.102 *********** 2025-05-13 19:43:54.913117 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:43:54.950581 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:43:54.984906 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:43:55.028439 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:43:55.077902 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:43:55.147911 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:43:55.148471 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:43:55.149204 | orchestrator | 2025-05-13 19:43:55.150122 | orchestrator | 
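
Note: every host is pinned to UTC after tzdata is confirmed present; the /etc/adjtime handling (the Create task above and the Ensure task just below) is skipped, presumably because the hardware clock is not managed on these virtual machines. A one-task sketch using community.general.timezone (an assumption; the role's actual implementation is not visible in the log):

    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC
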
TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-13 19:43:55.150772 | orchestrator | Tuesday 13 May 2025 19:43:55 +0000 (0:00:00.319) 0:05:06.422 *********** 2025-05-13 19:43:55.216222 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:43:55.249609 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:43:55.283710 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:43:55.312287 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:43:55.343691 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:43:55.537890 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:43:55.540661 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:43:55.541923 | orchestrator | 2025-05-13 19:43:55.542650 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-13 19:43:55.543420 | orchestrator | Tuesday 13 May 2025 19:43:55 +0000 (0:00:00.386) 0:05:06.808 *********** 2025-05-13 19:43:55.644809 | orchestrator | ok: [testbed-manager] 2025-05-13 19:43:55.680088 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:43:55.745494 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:43:55.785870 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:43:55.853658 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:43:55.854921 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:43:55.858283 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:43:55.858388 | orchestrator | 2025-05-13 19:43:55.858416 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-13 19:43:55.858431 | orchestrator | Tuesday 13 May 2025 19:43:55 +0000 (0:00:00.318) 0:05:07.127 *********** 2025-05-13 19:43:55.965724 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:43:55.997352 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:43:56.040026 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:43:56.085280 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:43:56.150711 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:43:56.151841 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:43:56.152031 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:43:56.152745 | orchestrator | 2025-05-13 19:43:56.153065 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-13 19:43:56.153611 | orchestrator | Tuesday 13 May 2025 19:43:56 +0000 (0:00:00.298) 0:05:07.426 *********** 2025-05-13 19:43:56.274191 | orchestrator | ok: [testbed-manager] 2025-05-13 19:43:56.437668 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:43:56.472870 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:43:56.509270 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:43:56.596277 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:43:56.597073 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:43:56.597992 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:43:56.598684 | orchestrator | 2025-05-13 19:43:56.599215 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-13 19:43:56.600349 | orchestrator | Tuesday 13 May 2025 19:43:56 +0000 (0:00:00.443) 0:05:07.869 *********** 2025-05-13 19:43:56.721141 | orchestrator | ok: [testbed-manager] =>  2025-05-13 19:43:56.725251 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.768214 | orchestrator | ok: [testbed-node-0] =>  2025-05-13 19:43:56.768404 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 
19:43:56.799144 | orchestrator | ok: [testbed-node-1] =>  2025-05-13 19:43:56.799202 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.837418 | orchestrator | ok: [testbed-node-2] =>  2025-05-13 19:43:56.837581 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.898522 | orchestrator | ok: [testbed-node-3] =>  2025-05-13 19:43:56.899283 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.901256 | orchestrator | ok: [testbed-node-4] =>  2025-05-13 19:43:56.901605 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.905880 | orchestrator | ok: [testbed-node-5] =>  2025-05-13 19:43:56.905910 | orchestrator |  docker_version: 5:27.5.1 2025-05-13 19:43:56.905922 | orchestrator | 2025-05-13 19:43:56.906966 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-13 19:43:56.908095 | orchestrator | Tuesday 13 May 2025 19:43:56 +0000 (0:00:00.303) 0:05:08.173 *********** 2025-05-13 19:43:57.021511 | orchestrator | ok: [testbed-manager] =>  2025-05-13 19:43:57.022978 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.078219 | orchestrator | ok: [testbed-node-0] =>  2025-05-13 19:43:57.078375 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.115163 | orchestrator | ok: [testbed-node-1] =>  2025-05-13 19:43:57.115245 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.151168 | orchestrator | ok: [testbed-node-2] =>  2025-05-13 19:43:57.152486 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.216469 | orchestrator | ok: [testbed-node-3] =>  2025-05-13 19:43:57.218722 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.220529 | orchestrator | ok: [testbed-node-4] =>  2025-05-13 19:43:57.222231 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.223209 | orchestrator | ok: [testbed-node-5] =>  2025-05-13 19:43:57.224264 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-13 19:43:57.225405 | orchestrator | 2025-05-13 19:43:57.225847 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-13 19:43:57.226423 | orchestrator | Tuesday 13 May 2025 19:43:57 +0000 (0:00:00.316) 0:05:08.489 *********** 2025-05-13 19:43:57.282704 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:43:57.314950 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:43:57.346436 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:43:57.381811 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:43:57.484471 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:43:57.484569 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:43:57.484583 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:43:57.484595 | orchestrator | 2025-05-13 19:43:57.485067 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-13 19:43:57.485610 | orchestrator | Tuesday 13 May 2025 19:43:57 +0000 (0:00:00.263) 0:05:08.752 *********** 2025-05-13 19:43:57.588977 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:43:57.622891 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:43:57.661252 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:43:57.694522 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:43:57.764809 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:43:57.765440 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:43:57.767479 | orchestrator | skipping: [testbed-node-5] 2025-05-13 
19:43:57.768339 | orchestrator | 2025-05-13 19:43:57.769761 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-13 19:43:57.772361 | orchestrator | Tuesday 13 May 2025 19:43:57 +0000 (0:00:00.287) 0:05:09.039 *********** 2025-05-13 19:43:58.190480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:43:58.192059 | orchestrator | 2025-05-13 19:43:58.192276 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-13 19:43:58.193715 | orchestrator | Tuesday 13 May 2025 19:43:58 +0000 (0:00:00.423) 0:05:09.463 *********** 2025-05-13 19:43:58.994226 | orchestrator | ok: [testbed-manager] 2025-05-13 19:43:58.996499 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:43:58.996534 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:43:58.996965 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:43:58.998109 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:43:58.998908 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:43:58.999648 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:43:59.000421 | orchestrator | 2025-05-13 19:43:59.001122 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-13 19:43:59.001735 | orchestrator | Tuesday 13 May 2025 19:43:58 +0000 (0:00:00.802) 0:05:10.265 *********** 2025-05-13 19:44:01.738162 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:44:01.745290 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:44:01.749525 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:44:01.749554 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:44:01.751178 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:44:01.755353 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:44:01.755887 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:01.756591 | orchestrator | 2025-05-13 19:44:01.757398 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-13 19:44:01.758013 | orchestrator | Tuesday 13 May 2025 19:44:01 +0000 (0:00:02.744) 0:05:13.010 *********** 2025-05-13 19:44:01.968107 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-13 19:44:01.968615 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-13 19:44:01.969468 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-13 19:44:02.046490 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:02.047510 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-13 19:44:02.047542 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-13 19:44:02.047631 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-13 19:44:02.126947 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:02.128301 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-13 19:44:02.128963 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-13 19:44:02.129591 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-13 19:44:02.214779 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:02.217163 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-13 19:44:02.217201 | orchestrator | 
skipping: [testbed-node-2] => (item=docker.io)  2025-05-13 19:44:02.292645 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-13 19:44:02.293948 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-13 19:44:02.295006 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-13 19:44:02.295807 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-13 19:44:02.363962 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:02.365250 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-13 19:44:02.368036 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-13 19:44:02.517363 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:02.518337 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-13 19:44:02.523059 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:02.523158 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-13 19:44:02.523175 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-13 19:44:02.523187 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-13 19:44:02.523427 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:02.524033 | orchestrator | 2025-05-13 19:44:02.524706 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-13 19:44:02.525793 | orchestrator | Tuesday 13 May 2025 19:44:02 +0000 (0:00:00.781) 0:05:13.791 *********** 2025-05-13 19:44:08.570727 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:08.570867 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:08.570885 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:08.570958 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:08.571041 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:08.571407 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:08.571672 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:08.571925 | orchestrator | 2025-05-13 19:44:08.572245 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-13 19:44:08.572771 | orchestrator | Tuesday 13 May 2025 19:44:08 +0000 (0:00:06.049) 0:05:19.840 *********** 2025-05-13 19:44:09.611829 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:09.611929 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:09.612384 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:09.613180 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:09.614348 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:09.615058 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:09.615857 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:09.616533 | orchestrator | 2025-05-13 19:44:09.617180 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-13 19:44:09.617898 | orchestrator | Tuesday 13 May 2025 19:44:09 +0000 (0:00:01.041) 0:05:20.882 *********** 2025-05-13 19:44:16.927645 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:16.928794 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:16.931407 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:16.932595 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:16.933558 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:16.935360 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:16.937346 | 
orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:16.937652 | orchestrator | 2025-05-13 19:44:16.938866 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-13 19:44:16.939803 | orchestrator | Tuesday 13 May 2025 19:44:16 +0000 (0:00:07.318) 0:05:28.200 *********** 2025-05-13 19:44:20.225084 | orchestrator | changed: [testbed-manager] 2025-05-13 19:44:20.225169 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:20.225937 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:20.227298 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:20.228607 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:20.229050 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:20.229771 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:20.230473 | orchestrator | 2025-05-13 19:44:20.231316 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-13 19:44:20.231818 | orchestrator | Tuesday 13 May 2025 19:44:20 +0000 (0:00:03.292) 0:05:31.492 *********** 2025-05-13 19:44:21.505837 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:21.505998 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:21.506889 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:21.508932 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:21.509883 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:21.510924 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:21.512021 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:21.513431 | orchestrator | 2025-05-13 19:44:21.514164 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-13 19:44:21.514999 | orchestrator | Tuesday 13 May 2025 19:44:21 +0000 (0:00:01.283) 0:05:32.776 *********** 2025-05-13 19:44:22.812753 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:22.812996 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:22.817176 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:22.817207 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:22.817219 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:22.817231 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:22.817287 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:22.817912 | orchestrator | 2025-05-13 19:44:22.818835 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-13 19:44:22.819329 | orchestrator | Tuesday 13 May 2025 19:44:22 +0000 (0:00:01.308) 0:05:34.085 *********** 2025-05-13 19:44:23.021140 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:23.087780 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:23.155804 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:23.224304 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:23.414738 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:23.415604 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:23.422560 | orchestrator | changed: [testbed-manager] 2025-05-13 19:44:23.425925 | orchestrator | 2025-05-13 19:44:23.426494 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-13 19:44:23.427122 | orchestrator | Tuesday 13 May 2025 19:44:23 +0000 (0:00:00.599) 0:05:34.684 *********** 2025-05-13 19:44:32.673591 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:32.673705 | orchestrator | changed: 
[testbed-node-1] 2025-05-13 19:44:32.673846 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:32.675022 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:32.676527 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:32.678675 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:32.679773 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:32.681158 | orchestrator | 2025-05-13 19:44:32.682316 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-13 19:44:32.683442 | orchestrator | Tuesday 13 May 2025 19:44:32 +0000 (0:00:09.259) 0:05:43.944 *********** 2025-05-13 19:44:33.269407 | orchestrator | changed: [testbed-manager] 2025-05-13 19:44:33.342693 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:33.839993 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:33.840148 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:33.843933 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:33.843960 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:33.843972 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:33.844197 | orchestrator | 2025-05-13 19:44:33.845305 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-13 19:44:33.847677 | orchestrator | Tuesday 13 May 2025 19:44:33 +0000 (0:00:01.164) 0:05:45.108 *********** 2025-05-13 19:44:42.099943 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:42.100698 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:42.101627 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:42.104005 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:42.105128 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:42.106535 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:42.107453 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:42.107783 | orchestrator | 2025-05-13 19:44:42.108867 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-13 19:44:42.109765 | orchestrator | Tuesday 13 May 2025 19:44:42 +0000 (0:00:08.262) 0:05:53.371 *********** 2025-05-13 19:44:52.124602 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:52.124741 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:52.126511 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:52.126551 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:52.126563 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:52.126908 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:52.127256 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:52.128171 | orchestrator | 2025-05-13 19:44:52.128739 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-13 19:44:52.128772 | orchestrator | Tuesday 13 May 2025 19:44:52 +0000 (0:00:10.023) 0:06:03.394 *********** 2025-05-13 19:44:52.466463 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-13 19:44:53.301326 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-13 19:44:53.302161 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-13 19:44:53.304226 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-13 19:44:53.305261 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-13 19:44:53.307330 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-13 19:44:53.309102 | 
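
The pin and lock steps in this stretch keep apt from drifting to newer Docker builds between runs: pinned packages stay on the selected version, and the containerd unlock/install/lock sequence temporarily releases the hold, installs, and re-applies it. A plausible sketch; the docker-ce package name and the pin pattern below are assumptions, not the testbed's actual pin:

  - name: Pin docker package version
    ansible.builtin.copy:
      content: |
        Package: docker-ce
        Pin: version 5:26.*
        Pin-Priority: 1000
      dest: /etc/apt/preferences.d/docker-ce
      mode: "0644"

  - name: Lock containerd package
    ansible.builtin.dpkg_selections:
      name: containerd.io
      selection: hold
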
orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-13 19:44:53.310351 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-13 19:44:53.311971 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-13 19:44:53.313136 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-13 19:44:53.313727 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-13 19:44:53.315009 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-13 19:44:53.316920 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-13 19:44:53.318297 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-13 19:44:53.318571 | orchestrator | 2025-05-13 19:44:53.319989 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-13 19:44:53.321138 | orchestrator | Tuesday 13 May 2025 19:44:53 +0000 (0:00:01.179) 0:06:04.574 *********** 2025-05-13 19:44:53.429181 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:53.502733 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:53.560511 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:53.619192 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:53.715050 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:53.839041 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:53.846007 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:53.846432 | orchestrator | 2025-05-13 19:44:53.846738 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-13 19:44:53.847333 | orchestrator | Tuesday 13 May 2025 19:44:53 +0000 (0:00:00.535) 0:06:05.109 *********** 2025-05-13 19:44:57.502230 | orchestrator | ok: [testbed-manager] 2025-05-13 19:44:57.503074 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:44:57.503116 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:44:57.503130 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:44:57.503156 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:44:57.503167 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:44:57.503280 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:44:57.503826 | orchestrator | 2025-05-13 19:44:57.504434 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-13 19:44:57.505724 | orchestrator | Tuesday 13 May 2025 19:44:57 +0000 (0:00:03.655) 0:06:08.765 *********** 2025-05-13 19:44:57.642902 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:57.708733 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:57.775520 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:57.858817 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:57.925423 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:58.033217 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:58.033788 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:58.034931 | orchestrator | 2025-05-13 19:44:58.035519 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-13 19:44:58.036097 | orchestrator | Tuesday 13 May 2025 19:44:58 +0000 (0:00:00.542) 0:06:09.308 *********** 2025-05-13 19:44:58.105576 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-13 19:44:58.106931 | orchestrator | skipping: [testbed-manager] => 
(item=python-docker)  2025-05-13 19:44:58.180836 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:58.180908 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-13 19:44:58.184970 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-13 19:44:58.278246 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:58.280399 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-13 19:44:58.281333 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-13 19:44:58.356261 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:58.356506 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-13 19:44:58.357452 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-13 19:44:58.441777 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:58.443516 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-13 19:44:58.444024 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-13 19:44:58.517010 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:58.517642 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-13 19:44:58.517740 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-13 19:44:58.630444 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:58.631718 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-13 19:44:58.633067 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-13 19:44:58.634429 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:58.635615 | orchestrator | 2025-05-13 19:44:58.636597 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-13 19:44:58.637558 | orchestrator | Tuesday 13 May 2025 19:44:58 +0000 (0:00:00.593) 0:06:09.902 *********** 2025-05-13 19:44:58.767222 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:58.832096 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:58.894907 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:58.965333 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:59.030670 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:59.151166 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:59.153046 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:59.154345 | orchestrator | 2025-05-13 19:44:59.156847 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-13 19:44:59.156879 | orchestrator | Tuesday 13 May 2025 19:44:59 +0000 (0:00:00.520) 0:06:10.422 *********** 2025-05-13 19:44:59.300503 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:44:59.366074 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:44:59.437059 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:44:59.501322 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:44:59.564217 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:44:59.675100 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:44:59.676986 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:44:59.680574 | orchestrator | 2025-05-13 19:44:59.680656 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-13 19:44:59.680673 | orchestrator | Tuesday 13 May 2025 19:44:59 +0000 (0:00:00.526) 0:06:10.949 
*********** 2025-05-13 19:44:59.988079 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:00.051997 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:00.125150 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:00.196562 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:00.284694 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:00.425865 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:00.427706 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:00.428554 | orchestrator | 2025-05-13 19:45:00.429216 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-13 19:45:00.429954 | orchestrator | Tuesday 13 May 2025 19:45:00 +0000 (0:00:00.749) 0:06:11.698 *********** 2025-05-13 19:45:02.026793 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:02.026891 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:02.028353 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:02.029529 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:02.030222 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:02.031325 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:02.032078 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:02.032724 | orchestrator | 2025-05-13 19:45:02.033497 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-13 19:45:02.034342 | orchestrator | Tuesday 13 May 2025 19:45:02 +0000 (0:00:01.599) 0:06:13.297 *********** 2025-05-13 19:45:02.903872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:02.904068 | orchestrator | 2025-05-13 19:45:02.904452 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-13 19:45:02.905560 | orchestrator | Tuesday 13 May 2025 19:45:02 +0000 (0:00:00.876) 0:06:14.174 *********** 2025-05-13 19:45:03.536879 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:03.945023 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:03.945173 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:03.945436 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:03.945865 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:03.946542 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:03.951914 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:03.952005 | orchestrator | 2025-05-13 19:45:03.952022 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-13 19:45:03.952036 | orchestrator | Tuesday 13 May 2025 19:45:03 +0000 (0:00:01.043) 0:06:15.217 *********** 2025-05-13 19:45:04.383310 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:04.831817 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:04.831986 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:04.833003 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:04.833577 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:04.834663 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:04.835203 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:04.835913 | orchestrator | 2025-05-13 19:45:04.837041 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-13 19:45:04.837560 | 
orchestrator | Tuesday 13 May 2025 19:45:04 +0000 (0:00:00.886) 0:06:16.103 *********** 2025-05-13 19:45:06.269115 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:06.270185 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:06.271319 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:06.271954 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:06.273096 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:06.273876 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:06.274992 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:06.275916 | orchestrator | 2025-05-13 19:45:06.277236 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-13 19:45:06.277803 | orchestrator | Tuesday 13 May 2025 19:45:06 +0000 (0:00:01.436) 0:06:17.539 *********** 2025-05-13 19:45:06.405887 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:07.593432 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:07.594212 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:07.594647 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:07.595193 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:07.596659 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:07.597015 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:07.598952 | orchestrator | 2025-05-13 19:45:07.598981 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-13 19:45:07.598994 | orchestrator | Tuesday 13 May 2025 19:45:07 +0000 (0:00:01.325) 0:06:18.865 *********** 2025-05-13 19:45:08.882549 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:08.882773 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:08.883730 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:08.885967 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:08.885991 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:08.886255 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:08.888741 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:08.889562 | orchestrator | 2025-05-13 19:45:08.890674 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-13 19:45:08.891479 | orchestrator | Tuesday 13 May 2025 19:45:08 +0000 (0:00:01.289) 0:06:20.154 *********** 2025-05-13 19:45:10.462533 | orchestrator | changed: [testbed-manager] 2025-05-13 19:45:10.463525 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:10.464553 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:10.465646 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:10.466456 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:10.467903 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:10.469143 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:10.469882 | orchestrator | 2025-05-13 19:45:10.470773 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-13 19:45:10.472312 | orchestrator | Tuesday 13 May 2025 19:45:10 +0000 (0:00:01.579) 0:06:21.734 *********** 2025-05-13 19:45:11.384007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:11.386206 | orchestrator | 2025-05-13 19:45:11.386261 | orchestrator | TASK [osism.services.docker : 
Reload systemd daemon] *************************** 2025-05-13 19:45:11.386744 | orchestrator | Tuesday 13 May 2025 19:45:11 +0000 (0:00:00.918) 0:06:22.652 *********** 2025-05-13 19:45:12.716622 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:12.716791 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:12.717810 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:12.718950 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:12.719827 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:12.720787 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:12.721374 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:12.722628 | orchestrator | 2025-05-13 19:45:12.723941 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-13 19:45:12.724743 | orchestrator | Tuesday 13 May 2025 19:45:12 +0000 (0:00:01.335) 0:06:23.987 *********** 2025-05-13 19:45:13.825947 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:13.826100 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:13.826601 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:13.829617 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:13.830553 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:13.831303 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:13.831669 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:13.832134 | orchestrator | 2025-05-13 19:45:13.832905 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-13 19:45:13.834440 | orchestrator | Tuesday 13 May 2025 19:45:13 +0000 (0:00:01.109) 0:06:25.097 *********** 2025-05-13 19:45:15.217127 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:15.217575 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:15.221501 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:15.221565 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:15.222108 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:15.223441 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:15.224719 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:15.225187 | orchestrator | 2025-05-13 19:45:15.226595 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-13 19:45:15.227322 | orchestrator | Tuesday 13 May 2025 19:45:15 +0000 (0:00:01.392) 0:06:26.489 *********** 2025-05-13 19:45:16.361528 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:16.362366 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:16.363349 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:16.364632 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:16.365098 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:16.365835 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:16.366798 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:16.367381 | orchestrator | 2025-05-13 19:45:16.367925 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-13 19:45:16.368347 | orchestrator | Tuesday 13 May 2025 19:45:16 +0000 (0:00:01.142) 0:06:27.632 *********** 2025-05-13 19:45:17.727868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:17.730116 | orchestrator | 2025-05-13 19:45:17.731763 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-05-13 19:45:17.732855 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.904) 0:06:28.536 *********** 2025-05-13 19:45:17.733950 | orchestrator | 2025-05-13 19:45:17.734672 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.736231 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.044) 0:06:28.581 *********** 2025-05-13 19:45:17.737186 | orchestrator | 2025-05-13 19:45:17.738430 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.739867 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.039) 0:06:28.620 *********** 2025-05-13 19:45:17.740430 | orchestrator | 2025-05-13 19:45:17.741504 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.742525 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.039) 0:06:28.659 *********** 2025-05-13 19:45:17.742832 | orchestrator | 2025-05-13 19:45:17.743525 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.744040 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.215) 0:06:28.875 *********** 2025-05-13 19:45:17.744741 | orchestrator | 2025-05-13 19:45:17.745304 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.745765 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.039) 0:06:28.914 *********** 2025-05-13 19:45:17.746183 | orchestrator | 2025-05-13 19:45:17.746755 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-13 19:45:17.747150 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.038) 0:06:28.953 *********** 2025-05-13 19:45:17.747855 | orchestrator | 2025-05-13 19:45:17.748212 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-13 19:45:17.748937 | orchestrator | Tuesday 13 May 2025 19:45:17 +0000 (0:00:00.046) 0:06:28.999 *********** 2025-05-13 19:45:18.897219 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:18.898301 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:18.898470 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:18.898954 | orchestrator | 2025-05-13 19:45:18.899577 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-13 19:45:18.900670 | orchestrator | Tuesday 13 May 2025 19:45:18 +0000 (0:00:01.168) 0:06:30.168 *********** 2025-05-13 19:45:20.217335 | orchestrator | changed: [testbed-manager] 2025-05-13 19:45:20.217636 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:20.217774 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:20.219219 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:20.220010 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:20.220462 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:20.221022 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:20.221892 | orchestrator | 2025-05-13 19:45:20.222545 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-13 19:45:20.223223 | orchestrator | Tuesday 13 May 2025 19:45:20 +0000 (0:00:01.319) 0:06:31.488 *********** 2025-05-13 19:45:21.334256 | orchestrator | changed: [testbed-manager] 2025-05-13 19:45:21.335051 | orchestrator | changed: [testbed-node-0] 
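
The repeated "Flush handlers" tasks and the handler runs that follow are the standard notify/flush pattern: configuration tasks notify a restart handler, and the role forces all pending handlers to run at a defined point instead of at the end of the play. A minimal sketch of the shape, with placeholder daemon.json contents:

  - name: Copy daemon.json configuration file
    ansible.builtin.copy:
      content: |
        {
          "log-driver": "json-file"
        }
      dest: /etc/docker/daemon.json
      mode: "0644"
    notify: Restart docker service

  # Run every pending handler now rather than at the end of the play.
  - name: Flush handlers
    ansible.builtin.meta: flush_handlers

Note that testbed-manager skips the docker restart handler below even though its daemon.json task reported changed, so the skip comes from an explicit guard in the role, presumably to avoid disrupting containers already running on the manager.
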
2025-05-13 19:45:21.336169 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:21.337452 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:21.339051 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:21.339141 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:21.339986 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:21.341050 | orchestrator | 2025-05-13 19:45:21.341818 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-13 19:45:21.342996 | orchestrator | Tuesday 13 May 2025 19:45:21 +0000 (0:00:01.116) 0:06:32.604 *********** 2025-05-13 19:45:21.460683 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:23.687855 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:23.688136 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:23.688164 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:23.690327 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:23.690829 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:23.691447 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:23.691946 | orchestrator | 2025-05-13 19:45:23.692339 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-13 19:45:23.693486 | orchestrator | Tuesday 13 May 2025 19:45:23 +0000 (0:00:02.351) 0:06:34.956 *********** 2025-05-13 19:45:23.785775 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:23.786518 | orchestrator | 2025-05-13 19:45:23.787640 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-13 19:45:23.789246 | orchestrator | Tuesday 13 May 2025 19:45:23 +0000 (0:00:00.100) 0:06:35.057 *********** 2025-05-13 19:45:25.010212 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:25.012197 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:25.012827 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:25.014214 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:25.015458 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:25.016739 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:25.017472 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:25.018173 | orchestrator | 2025-05-13 19:45:25.018939 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-13 19:45:25.019332 | orchestrator | Tuesday 13 May 2025 19:45:24 +0000 (0:00:01.223) 0:06:36.280 *********** 2025-05-13 19:45:25.151556 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:25.223844 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:25.289146 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:25.352255 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:25.422740 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:25.548481 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:25.549079 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:25.550230 | orchestrator | 2025-05-13 19:45:25.551013 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-13 19:45:25.551816 | orchestrator | Tuesday 13 May 2025 19:45:25 +0000 (0:00:00.537) 0:06:36.818 *********** 2025-05-13 19:45:26.445499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:26.445923 | orchestrator | 2025-05-13 19:45:26.447111 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-13 19:45:26.450224 | orchestrator | Tuesday 13 May 2025 19:45:26 +0000 (0:00:00.898) 0:06:37.716 *********** 2025-05-13 19:45:26.882781 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:27.303701 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:27.304166 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:27.304194 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:27.304699 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:27.305330 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:27.305760 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:27.306142 | orchestrator | 2025-05-13 19:45:27.307657 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-13 19:45:27.308393 | orchestrator | Tuesday 13 May 2025 19:45:27 +0000 (0:00:00.859) 0:06:38.575 *********** 2025-05-13 19:45:30.068754 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-13 19:45:30.070436 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-13 19:45:30.070984 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-13 19:45:30.072631 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-13 19:45:30.073887 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-13 19:45:30.076906 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-13 19:45:30.076970 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-13 19:45:30.078559 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-13 19:45:30.079515 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-13 19:45:30.080190 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-13 19:45:30.082675 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-13 19:45:30.083074 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-13 19:45:30.084298 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-13 19:45:30.087110 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-13 19:45:30.087748 | orchestrator | 2025-05-13 19:45:30.088317 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-13 19:45:30.088920 | orchestrator | Tuesday 13 May 2025 19:45:30 +0000 (0:00:02.763) 0:06:41.339 *********** 2025-05-13 19:45:30.197746 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:30.272341 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:30.338399 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:30.405514 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:30.479629 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:30.567273 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:30.571305 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:30.571714 | orchestrator | 2025-05-13 19:45:30.572168 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-13 19:45:30.572679 | orchestrator | Tuesday 13 May 2025 19:45:30 +0000 (0:00:00.497) 0:06:41.836 *********** 2025-05-13 19:45:31.575765 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:31.576585 | orchestrator | 2025-05-13 19:45:31.577225 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-13 19:45:31.577952 | orchestrator | Tuesday 13 May 2025 19:45:31 +0000 (0:00:01.010) 0:06:42.847 *********** 2025-05-13 19:45:31.999556 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:32.425484 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:32.426128 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:32.427381 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:32.428128 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:32.428780 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:32.430143 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:32.431166 | orchestrator | 2025-05-13 19:45:32.431886 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-13 19:45:32.432940 | orchestrator | Tuesday 13 May 2025 19:45:32 +0000 (0:00:00.849) 0:06:43.696 *********** 2025-05-13 19:45:32.844813 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:33.233120 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:33.233645 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:33.234275 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:33.235055 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:33.235689 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:33.236277 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:33.236850 | orchestrator | 2025-05-13 19:45:33.237467 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-13 19:45:33.238077 | orchestrator | Tuesday 13 May 2025 19:45:33 +0000 (0:00:00.809) 0:06:44.505 *********** 2025-05-13 19:45:33.399647 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:33.463565 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:33.534793 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:33.600401 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:33.667778 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:33.776462 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:33.779050 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:33.779885 | orchestrator | 2025-05-13 19:45:33.784479 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-13 19:45:33.784543 | orchestrator | Tuesday 13 May 2025 19:45:33 +0000 (0:00:00.545) 0:06:45.051 *********** 2025-05-13 19:45:35.349204 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:35.350181 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:35.354613 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:35.354654 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:35.356034 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:35.357231 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:35.358124 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:35.358872 | orchestrator | 2025-05-13 19:45:35.359665 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-13 19:45:35.360371 | orchestrator | Tuesday 13 May 2025 19:45:35 +0000 (0:00:01.568) 0:06:46.620 *********** 2025-05-13 
19:45:35.482508 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:35.546297 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:35.613129 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:35.858064 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:35.923404 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:36.015789 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:36.017098 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:36.017958 | orchestrator | 2025-05-13 19:45:36.018656 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-13 19:45:36.020828 | orchestrator | Tuesday 13 May 2025 19:45:36 +0000 (0:00:00.668) 0:06:47.288 *********** 2025-05-13 19:45:43.127892 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:43.128286 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:43.129346 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:43.129547 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:43.131129 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:43.133619 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:43.133776 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:43.134123 | orchestrator | 2025-05-13 19:45:43.134883 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-13 19:45:43.135596 | orchestrator | Tuesday 13 May 2025 19:45:43 +0000 (0:00:07.112) 0:06:54.401 *********** 2025-05-13 19:45:44.504578 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:44.504804 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:44.506726 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:44.507764 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:44.508535 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:44.509486 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:44.510835 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:44.512524 | orchestrator | 2025-05-13 19:45:44.512780 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-13 19:45:44.514572 | orchestrator | Tuesday 13 May 2025 19:45:44 +0000 (0:00:01.374) 0:06:55.775 *********** 2025-05-13 19:45:46.184554 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:46.185322 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:46.186871 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:46.186977 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:46.188304 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:46.188497 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:46.188974 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:45:46.190096 | orchestrator | 2025-05-13 19:45:46.191140 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-13 19:45:46.191786 | orchestrator | Tuesday 13 May 2025 19:45:46 +0000 (0:00:01.679) 0:06:57.455 *********** 2025-05-13 19:45:47.991826 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:47.992313 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:45:47.992780 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:45:47.996089 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:45:47.996132 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:45:47.997306 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:45:47.997791 | orchestrator | 
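
Compose v2 ships as an apt plugin rather than the old standalone binary, which is why the role first removes any docker-compose binary or package and only then installs docker-compose-plugin, wiring everything to an osism.target unit. A sketch of the install and the systemd wiring; the unit contents are an assumption:

  - name: Install docker-compose-plugin package
    ansible.builtin.apt:
      name: docker-compose-plugin
      state: present

  - name: Copy osism.target systemd file
    ansible.builtin.copy:
      content: |
        [Unit]
        Description=OSISM services

        [Install]
        WantedBy=multi-user.target
      dest: /etc/systemd/system/osism.target

  - name: Enable osism.target
    ansible.builtin.systemd:
      name: osism.target
      enabled: true
      daemon_reload: true
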
changed: [testbed-node-5] 2025-05-13 19:45:47.998425 | orchestrator | 2025-05-13 19:45:47.998985 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-13 19:45:47.999611 | orchestrator | Tuesday 13 May 2025 19:45:47 +0000 (0:00:01.807) 0:06:59.263 *********** 2025-05-13 19:45:48.452205 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:48.879362 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:48.880474 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:48.880957 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:48.881827 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:48.882889 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:48.883869 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:48.884788 | orchestrator | 2025-05-13 19:45:48.885792 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-13 19:45:48.886286 | orchestrator | Tuesday 13 May 2025 19:45:48 +0000 (0:00:00.885) 0:07:00.148 *********** 2025-05-13 19:45:49.015064 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:49.081184 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:49.156278 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:49.219157 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:49.288730 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:49.687888 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:49.688293 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:49.689757 | orchestrator | 2025-05-13 19:45:49.690272 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-13 19:45:49.691271 | orchestrator | Tuesday 13 May 2025 19:45:49 +0000 (0:00:00.812) 0:07:00.961 *********** 2025-05-13 19:45:49.821907 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:49.884124 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:49.952975 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:50.016718 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:50.079991 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:50.186706 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:50.187505 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:50.188113 | orchestrator | 2025-05-13 19:45:50.189206 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-13 19:45:50.189682 | orchestrator | Tuesday 13 May 2025 19:45:50 +0000 (0:00:00.497) 0:07:01.459 *********** 2025-05-13 19:45:50.517503 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:50.579595 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:50.653730 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:50.719324 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:50.783356 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:50.903229 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:50.903316 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:50.904183 | orchestrator | 2025-05-13 19:45:50.906092 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-13 19:45:50.907184 | orchestrator | Tuesday 13 May 2025 19:45:50 +0000 (0:00:00.717) 0:07:02.176 *********** 2025-05-13 19:45:51.030868 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:51.110823 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:51.173555 | orchestrator | ok: 
[testbed-node-1] 2025-05-13 19:45:51.236130 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:51.305129 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:51.401609 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:51.401706 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:51.402164 | orchestrator | 2025-05-13 19:45:51.403222 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-13 19:45:51.403952 | orchestrator | Tuesday 13 May 2025 19:45:51 +0000 (0:00:00.496) 0:07:02.673 *********** 2025-05-13 19:45:51.536891 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:51.604729 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:51.678787 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:51.748172 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:51.815676 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:51.925787 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:51.933144 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:51.933202 | orchestrator | 2025-05-13 19:45:51.933217 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-13 19:45:51.933230 | orchestrator | Tuesday 13 May 2025 19:45:51 +0000 (0:00:00.525) 0:07:03.199 *********** 2025-05-13 19:45:57.571726 | orchestrator | ok: [testbed-manager] 2025-05-13 19:45:57.571905 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:45:57.572299 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:45:57.572837 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:45:57.573213 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:45:57.573836 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:45:57.574747 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:45:57.575349 | orchestrator | 2025-05-13 19:45:57.575716 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-13 19:45:57.576134 | orchestrator | Tuesday 13 May 2025 19:45:57 +0000 (0:00:05.646) 0:07:08.845 *********** 2025-05-13 19:45:57.717746 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:45:57.777581 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:45:58.035193 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:45:58.105424 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:45:58.168702 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:45:58.302825 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:45:58.303667 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:45:58.304331 | orchestrator | 2025-05-13 19:45:58.305066 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-13 19:45:58.305916 | orchestrator | Tuesday 13 May 2025 19:45:58 +0000 (0:00:00.731) 0:07:09.576 *********** 2025-05-13 19:45:59.104354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:45:59.106413 | orchestrator | 2025-05-13 19:45:59.106995 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-13 19:45:59.108210 | orchestrator | Tuesday 13 May 2025 19:45:59 +0000 (0:00:00.798) 0:07:10.375 *********** 2025-05-13 19:46:00.798944 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:00.799236 | orchestrator | ok: [testbed-node-0] 2025-05-13 
19:46:00.799992 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:00.801210 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:00.802615 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:00.803432 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:00.804125 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:00.804839 | orchestrator | 2025-05-13 19:46:00.805363 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-13 19:46:00.806164 | orchestrator | Tuesday 13 May 2025 19:46:00 +0000 (0:00:01.696) 0:07:12.072 *********** 2025-05-13 19:46:01.898177 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:01.898831 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:01.899841 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:01.901683 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:01.901803 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:01.902986 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:01.903475 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:01.904510 | orchestrator | 2025-05-13 19:46:01.905478 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-13 19:46:01.906061 | orchestrator | Tuesday 13 May 2025 19:46:01 +0000 (0:00:01.097) 0:07:13.170 *********** 2025-05-13 19:46:02.335219 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:02.978838 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:02.978987 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:02.979074 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:02.979089 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:02.979821 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:02.980760 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:02.981062 | orchestrator | 2025-05-13 19:46:02.981737 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-13 19:46:02.982373 | orchestrator | Tuesday 13 May 2025 19:46:02 +0000 (0:00:01.082) 0:07:14.252 *********** 2025-05-13 19:46:04.732510 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.732872 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.734639 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.735929 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.736831 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.737868 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.739047 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-13 19:46:04.739410 | orchestrator | 2025-05-13 19:46:04.740258 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-13 
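
The chrony role templates its configuration straight from the collection (the chrony.conf.j2 path in the output) and restarts the service through a handler. A minimal sketch; the server variable name is hypothetical:

  - name: Copy configuration file
    ansible.builtin.template:
      src: chrony.conf.j2
      dest: /etc/chrony/chrony.conf
      mode: "0644"
    notify: Restart chrony service

  # chrony.conf.j2 might render a server list along these lines:
  #   {% for server in chrony_servers %}
  #   server {{ server }} iburst
  #   {% endfor %}
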
19:46:04.741049 | orchestrator | Tuesday 13 May 2025 19:46:04 +0000 (0:00:01.750) 0:07:16.002 *********** 2025-05-13 19:46:05.701937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:46:05.703159 | orchestrator | 2025-05-13 19:46:05.704116 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-13 19:46:05.705086 | orchestrator | Tuesday 13 May 2025 19:46:05 +0000 (0:00:00.970) 0:07:16.973 *********** 2025-05-13 19:46:14.234286 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:14.235259 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:14.237187 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:14.238095 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:14.240756 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:14.241615 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:14.242795 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:14.245122 | orchestrator | 2025-05-13 19:46:14.246404 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-13 19:46:14.246444 | orchestrator | Tuesday 13 May 2025 19:46:14 +0000 (0:00:08.532) 0:07:25.505 *********** 2025-05-13 19:46:15.931196 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:15.931453 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:15.935264 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:15.935303 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:15.935316 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:15.935327 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:15.936326 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:15.936444 | orchestrator | 2025-05-13 19:46:15.938330 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-13 19:46:15.939463 | orchestrator | Tuesday 13 May 2025 19:46:15 +0000 (0:00:01.697) 0:07:27.202 *********** 2025-05-13 19:46:17.419285 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:17.420075 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:17.420789 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:17.422055 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:17.423476 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:17.424191 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:17.424875 | orchestrator | 2025-05-13 19:46:17.425498 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-13 19:46:17.426102 | orchestrator | Tuesday 13 May 2025 19:46:17 +0000 (0:00:01.487) 0:07:28.690 *********** 2025-05-13 19:46:18.662303 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:18.662583 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:18.662692 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:18.663130 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:18.664820 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:18.667949 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:18.668297 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:18.669287 | orchestrator | 2025-05-13 19:46:18.669964 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-13 19:46:18.671603 
| orchestrator | 2025-05-13 19:46:18.672272 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-13 19:46:18.673282 | orchestrator | Tuesday 13 May 2025 19:46:18 +0000 (0:00:01.242) 0:07:29.933 *********** 2025-05-13 19:46:18.780210 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:46:18.849347 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:46:18.913394 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:46:18.974752 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:46:19.044974 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:46:19.160899 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:46:19.161913 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:46:19.162274 | orchestrator | 2025-05-13 19:46:19.163790 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-13 19:46:19.163894 | orchestrator | 2025-05-13 19:46:19.165036 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-13 19:46:19.166073 | orchestrator | Tuesday 13 May 2025 19:46:19 +0000 (0:00:00.500) 0:07:30.433 *********** 2025-05-13 19:46:20.471272 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:20.472084 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:20.473303 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:20.474213 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:20.475466 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:20.477557 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:20.478149 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:20.479213 | orchestrator | 2025-05-13 19:46:20.479774 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-13 19:46:20.480784 | orchestrator | Tuesday 13 May 2025 19:46:20 +0000 (0:00:01.307) 0:07:31.741 *********** 2025-05-13 19:46:22.058278 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:22.058379 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:22.058396 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:22.058824 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:22.059708 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:22.059851 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:22.060465 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:22.060984 | orchestrator | 2025-05-13 19:46:22.061560 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-13 19:46:22.061999 | orchestrator | Tuesday 13 May 2025 19:46:22 +0000 (0:00:01.587) 0:07:33.328 *********** 2025-05-13 19:46:22.190751 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:46:22.255990 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:46:22.320004 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:46:22.391033 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:46:22.448123 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:46:22.857422 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:46:22.858082 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:46:22.862109 | orchestrator | 2025-05-13 19:46:22.862138 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-13 19:46:22.862152 | orchestrator | Tuesday 13 May 2025 19:46:22 +0000 (0:00:00.799) 0:07:34.127 *********** 2025-05-13 19:46:24.167274 | 
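
The journald step follows the same copy-then-notify shape as the other services. A sketch with placeholder configuration keys (the testbed's actual journald settings are not shown in the log):

  - name: Copy configuration file
    ansible.builtin.copy:
      content: |
        [Journal]
        SystemMaxUse=1G
      dest: /etc/systemd/journald.conf
    notify: Restart journald service

  - name: Restart journald service   # handler
    ansible.builtin.systemd:
      name: systemd-journald
      state: restarted
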
orchestrator | changed: [testbed-manager] 2025-05-13 19:46:24.167449 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:24.167630 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:24.168006 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:24.168437 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:24.168944 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:24.169241 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:24.173767 | orchestrator | 2025-05-13 19:46:24.175773 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-13 19:46:24.176865 | orchestrator | 2025-05-13 19:46:24.178287 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-13 19:46:24.179391 | orchestrator | Tuesday 13 May 2025 19:46:24 +0000 (0:00:01.312) 0:07:35.440 *********** 2025-05-13 19:46:25.170817 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:46:25.170959 | orchestrator | 2025-05-13 19:46:25.172489 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-13 19:46:25.173930 | orchestrator | Tuesday 13 May 2025 19:46:25 +0000 (0:00:00.996) 0:07:36.436 *********** 2025-05-13 19:46:25.625005 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:25.845055 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:26.265783 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:26.267095 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:26.269809 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:26.269835 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:26.270646 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:26.271355 | orchestrator | 2025-05-13 19:46:26.272429 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-13 19:46:26.273021 | orchestrator | Tuesday 13 May 2025 19:46:26 +0000 (0:00:01.100) 0:07:37.537 *********** 2025-05-13 19:46:27.441987 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:27.442350 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:27.443657 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:27.444512 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:27.446230 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:27.446975 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:27.447680 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:27.448747 | orchestrator | 2025-05-13 19:46:27.449484 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-13 19:46:27.450053 | orchestrator | Tuesday 13 May 2025 19:46:27 +0000 (0:00:01.174) 0:07:38.711 *********** 2025-05-13 19:46:28.459716 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:46:28.461122 | orchestrator | 2025-05-13 19:46:28.463088 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-13 19:46:28.464473 | orchestrator | Tuesday 13 May 2025 19:46:28 +0000 (0:00:01.019) 0:07:39.731 *********** 2025-05-13 19:46:28.880153 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:29.311666 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:29.312572 | 
orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:29.313376 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:29.314080 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:29.314979 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:29.315298 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:29.315948 | orchestrator | 2025-05-13 19:46:29.317207 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-13 19:46:29.317236 | orchestrator | Tuesday 13 May 2025 19:46:29 +0000 (0:00:00.853) 0:07:40.585 *********** 2025-05-13 19:46:30.482090 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:30.482872 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:30.485304 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:30.486136 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:30.487871 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:30.489490 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:30.491113 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:30.491792 | orchestrator | 2025-05-13 19:46:30.494003 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:46:30.494114 | orchestrator | 2025-05-13 19:46:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:46:30.494130 | orchestrator | 2025-05-13 19:46:30 | INFO  | Please wait and do not abort execution. 2025-05-13 19:46:30.494376 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-13 19:46:30.495350 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-13 19:46:30.495856 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-13 19:46:30.496821 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-13 19:46:30.497756 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-13 19:46:30.498177 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-13 19:46:30.498826 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-13 19:46:30.499584 | orchestrator | 2025-05-13 19:46:30.500006 | orchestrator | 2025-05-13 19:46:30.500924 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:46:30.501809 | orchestrator | Tuesday 13 May 2025 19:46:30 +0000 (0:00:01.167) 0:07:41.752 *********** 2025-05-13 19:46:30.502360 | orchestrator | =============================================================================== 2025-05-13 19:46:30.503140 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.43s 2025-05-13 19:46:30.503637 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.97s 2025-05-13 19:46:30.504027 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.76s 2025-05-13 19:46:30.504902 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.27s 2025-05-13 19:46:30.505362 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.75s 2025-05-13 
19:46:30.505963 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.59s 2025-05-13 19:46:30.506829 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.02s 2025-05-13 19:46:30.506919 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.26s 2025-05-13 19:46:30.507258 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.53s 2025-05-13 19:46:30.508045 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.26s 2025-05-13 19:46:30.508498 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.10s 2025-05-13 19:46:30.508820 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.63s 2025-05-13 19:46:30.509207 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.32s 2025-05-13 19:46:30.509649 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.26s 2025-05-13 19:46:30.510001 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.24s 2025-05-13 19:46:30.510369 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.11s 2025-05-13 19:46:30.510715 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.56s 2025-05-13 19:46:30.511295 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.05s 2025-05-13 19:46:30.511674 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.65s 2025-05-13 19:46:30.512063 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.58s 2025-05-13 19:46:31.579310 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-13 19:46:31.579401 | orchestrator | + osism apply network 2025-05-13 19:46:33.889973 | orchestrator | 2025-05-13 19:46:33 | INFO  | Task 84afc0ad-c2c0-40a9-a498-76254832e3a1 (network) was prepared for execution. 2025-05-13 19:46:33.890134 | orchestrator | 2025-05-13 19:46:33 | INFO  | It takes a moment until task 84afc0ad-c2c0-40a9-a498-76254832e3a1 (network) has been started and output is visible here. 
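The two osism.commons.state includes above persist the bootstrap status and timestamp as Ansible local facts on every host. A minimal sketch of that mechanism, assuming the conventional /etc/ansible/facts.d location and a JSON fact file (the role's actual file name and format are not visible in this log):

# Hypothetical fact file; Ansible exposes any *.fact file below facts.d
# as ansible_local.<basename> on the next fact-gathering run.
sudo mkdir -p /etc/ansible/facts.d
sudo tee /etc/ansible/facts.d/osism.fact > /dev/null <<'EOF'
{"bootstrap": {"status": "True", "timestamp": "2025-05-13T19:46:30+00:00"}}
EOF
# Verify that the state is picked up:
ansible localhost -m ansible.builtin.setup -a 'filter=ansible_local'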
2025-05-13 19:46:38.157721 | orchestrator | 2025-05-13 19:46:38.157906 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-13 19:46:38.159455 | orchestrator | 2025-05-13 19:46:38.160811 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-13 19:46:38.161835 | orchestrator | Tuesday 13 May 2025 19:46:38 +0000 (0:00:00.268) 0:00:00.268 *********** 2025-05-13 19:46:38.312805 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:38.391768 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:38.466558 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:38.548651 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:38.739047 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:38.878828 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:38.879315 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:38.880317 | orchestrator | 2025-05-13 19:46:38.884278 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-13 19:46:38.885062 | orchestrator | Tuesday 13 May 2025 19:46:38 +0000 (0:00:00.719) 0:00:00.987 *********** 2025-05-13 19:46:40.064853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:46:40.065334 | orchestrator | 2025-05-13 19:46:40.065995 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-13 19:46:40.067124 | orchestrator | Tuesday 13 May 2025 19:46:40 +0000 (0:00:01.184) 0:00:02.172 *********** 2025-05-13 19:46:41.927914 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:41.928023 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:41.928911 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:41.929730 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:41.930404 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:41.933549 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:41.933575 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:41.933587 | orchestrator | 2025-05-13 19:46:41.933600 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-13 19:46:41.933613 | orchestrator | Tuesday 13 May 2025 19:46:41 +0000 (0:00:01.868) 0:00:04.040 *********** 2025-05-13 19:46:43.621933 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:43.622510 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:43.624641 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:43.624668 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:43.624864 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:43.625434 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:43.626357 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:43.626739 | orchestrator | 2025-05-13 19:46:43.627076 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-13 19:46:43.628082 | orchestrator | Tuesday 13 May 2025 19:46:43 +0000 (0:00:01.687) 0:00:05.728 *********** 2025-05-13 19:46:44.546919 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-13 19:46:44.549026 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-13 19:46:44.550784 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-13 19:46:44.551276 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-13 19:46:44.552133 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-13 19:46:44.553184 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-13 19:46:44.553777 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-13 19:46:44.554861 | orchestrator | 2025-05-13 19:46:44.555811 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-13 19:46:44.556915 | orchestrator | Tuesday 13 May 2025 19:46:44 +0000 (0:00:00.929) 0:00:06.658 *********** 2025-05-13 19:46:47.931158 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 19:46:47.931293 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 19:46:47.932589 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 19:46:47.935132 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 19:46:47.935789 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:46:47.936747 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 19:46:47.937873 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 19:46:47.938592 | orchestrator | 2025-05-13 19:46:47.939132 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-13 19:46:47.940790 | orchestrator | Tuesday 13 May 2025 19:46:47 +0000 (0:00:03.381) 0:00:10.040 *********** 2025-05-13 19:46:49.547373 | orchestrator | changed: [testbed-manager] 2025-05-13 19:46:49.547834 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:49.549022 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:49.550455 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:49.552202 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:49.552231 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:49.553568 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:49.554283 | orchestrator | 2025-05-13 19:46:49.555134 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-13 19:46:49.555783 | orchestrator | Tuesday 13 May 2025 19:46:49 +0000 (0:00:01.615) 0:00:11.655 *********** 2025-05-13 19:46:51.340039 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:46:51.340142 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 19:46:51.341135 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 19:46:51.341893 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 19:46:51.342729 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 19:46:51.343987 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 19:46:51.344270 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 19:46:51.344797 | orchestrator | 2025-05-13 19:46:51.347366 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-13 19:46:51.347393 | orchestrator | Tuesday 13 May 2025 19:46:51 +0000 (0:00:01.796) 0:00:13.452 *********** 2025-05-13 19:46:51.744855 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:52.013750 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:52.429257 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:52.433521 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:52.433707 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:52.433830 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:52.435160 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:52.435784 | orchestrator | 2025-05-13 
19:46:52.436644 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-13 19:46:52.436958 | orchestrator | Tuesday 13 May 2025 19:46:52 +0000 (0:00:01.085) 0:00:14.537 *********** 2025-05-13 19:46:52.602345 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:46:52.684458 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:46:52.766118 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:46:52.845376 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:46:52.920513 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:46:53.058624 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:46:53.059184 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:46:53.060512 | orchestrator | 2025-05-13 19:46:53.062206 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-13 19:46:53.062530 | orchestrator | Tuesday 13 May 2025 19:46:53 +0000 (0:00:00.633) 0:00:15.171 *********** 2025-05-13 19:46:55.105687 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:55.107710 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:46:55.108830 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:46:55.109924 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:46:55.111293 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:46:55.112169 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:46:55.113379 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:46:55.113938 | orchestrator | 2025-05-13 19:46:55.114972 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-13 19:46:55.116019 | orchestrator | Tuesday 13 May 2025 19:46:55 +0000 (0:00:02.041) 0:00:17.212 *********** 2025-05-13 19:46:55.356634 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:46:55.437175 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:46:55.528442 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:46:55.612816 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:46:56.051856 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:46:56.052956 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:46:56.053452 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-13 19:46:56.055297 | orchestrator | 2025-05-13 19:46:56.055332 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-13 19:46:56.056524 | orchestrator | Tuesday 13 May 2025 19:46:56 +0000 (0:00:00.952) 0:00:18.164 *********** 2025-05-13 19:46:57.676467 | orchestrator | ok: [testbed-manager] 2025-05-13 19:46:57.676590 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:46:57.677806 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:46:57.678584 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:46:57.679388 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:46:57.680048 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:46:57.682416 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:46:57.683317 | orchestrator | 2025-05-13 19:46:57.684593 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-13 19:46:57.686455 | orchestrator | Tuesday 13 May 2025 19:46:57 +0000 (0:00:01.619) 0:00:19.783 *********** 2025-05-13 19:46:58.961716 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:46:58.961827 | orchestrator | 2025-05-13 19:46:58.961844 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-13 19:46:58.961857 | orchestrator | Tuesday 13 May 2025 19:46:58 +0000 (0:00:01.280) 0:00:21.064 *********** 2025-05-13 19:46:59.661387 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:00.275827 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:47:00.279027 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:47:00.284494 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:47:00.284590 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:47:00.284611 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:47:00.286849 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:47:00.287705 | orchestrator | 2025-05-13 19:47:00.289621 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-13 19:47:00.290396 | orchestrator | Tuesday 13 May 2025 19:47:00 +0000 (0:00:01.319) 0:00:22.384 *********** 2025-05-13 19:47:00.451385 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:00.536029 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:47:00.618877 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:47:00.705829 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:47:00.786377 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:47:00.928409 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:47:00.928730 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:47:00.929897 | orchestrator | 2025-05-13 19:47:00.931174 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-13 19:47:00.932116 | orchestrator | Tuesday 13 May 2025 19:47:00 +0000 (0:00:00.650) 0:00:23.035 *********** 2025-05-13 19:47:01.522585 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:01.523346 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:01.526144 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:01.526194 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:01.526246 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:01.526331 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:01.629916 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:01.630535 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:01.631327 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:01.632112 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:02.108800 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:02.108895 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 19:47:02.108909 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-13 19:47:02.108920 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-13 
19:47:02.110925 | orchestrator | 2025-05-13 19:47:02.111479 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-13 19:47:02.112017 | orchestrator | Tuesday 13 May 2025 19:47:02 +0000 (0:00:01.172) 0:00:24.208 *********** 2025-05-13 19:47:02.260105 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:47:02.336832 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:47:02.417862 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:47:02.496023 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:47:02.573155 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:47:02.701068 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:47:02.701697 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:47:02.703071 | orchestrator | 2025-05-13 19:47:02.706903 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-13 19:47:02.708477 | orchestrator | Tuesday 13 May 2025 19:47:02 +0000 (0:00:00.605) 0:00:24.813 *********** 2025-05-13 19:47:06.397934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-3, testbed-node-2, testbed-node-4 2025-05-13 19:47:06.398214 | orchestrator | 2025-05-13 19:47:06.399170 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-05-13 19:47:06.399797 | orchestrator | Tuesday 13 May 2025 19:47:06 +0000 (0:00:03.692) 0:00:28.506 *********** 2025-05-13 19:47:11.122624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.123075 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.127595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.127640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.127654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.127666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.128144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.128285 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.131498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:11.132517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.133590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.136216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.137133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.138644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:11.140123 | orchestrator | 2025-05-13 19:47:11.140898 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-13 19:47:11.142354 | orchestrator | Tuesday 13 May 2025 19:47:11 +0000 (0:00:04.723) 0:00:33.229 *********** 2025-05-13 19:47:16.100866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.101093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.101975 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.102542 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.105682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.105720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.105763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.105776 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.107084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.107118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-13 19:47:16.108050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.109103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.109491 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.109732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-13 19:47:16.110360 | orchestrator | 2025-05-13 19:47:16.110668 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-13 19:47:16.111090 | orchestrator | Tuesday 13 May 2025 19:47:16 +0000 (0:00:04.980) 0:00:38.209 
*********** 2025-05-13 19:47:17.318483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 19:47:17.319611 | orchestrator | 2025-05-13 19:47:17.322328 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-13 19:47:17.322362 | orchestrator | Tuesday 13 May 2025 19:47:17 +0000 (0:00:01.217) 0:00:39.427 *********** 2025-05-13 19:47:17.764967 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:17.852708 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:47:18.284144 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:47:18.285460 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:47:18.287710 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:47:18.287854 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:47:18.289037 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:47:18.290762 | orchestrator | 2025-05-13 19:47:18.291092 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-13 19:47:18.292072 | orchestrator | Tuesday 13 May 2025 19:47:18 +0000 (0:00:00.967) 0:00:40.394 *********** 2025-05-13 19:47:18.388230 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:18.388914 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:18.392297 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:18.392325 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:18.493584 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:47:18.493657 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:18.493760 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:18.494545 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:18.495091 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:18.581164 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:47:18.581857 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:18.583824 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:18.583906 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:18.860420 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:18.865649 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:18.865703 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:18.865715 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:18.865768 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:18.955007 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:47:18.955840 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:18.957769 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:18.959463 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:18.960993 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:19.050686 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:47:19.052806 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:19.053766 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:19.054799 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:19.055420 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:20.302482 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:47:20.307120 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:47:20.307183 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-13 19:47:20.309198 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-13 19:47:20.311069 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-13 19:47:20.312736 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-13 19:47:20.313889 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:47:20.315017 | orchestrator | 2025-05-13 19:47:20.316796 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-13 19:47:20.317892 | orchestrator | Tuesday 13 May 2025 19:47:20 +0000 (0:00:02.015) 0:00:42.410 *********** 2025-05-13 19:47:20.468160 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:47:20.550816 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:47:20.628809 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:47:20.712835 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:47:20.797163 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:47:20.910782 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:47:20.912267 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:47:20.914006 | orchestrator | 2025-05-13 19:47:20.915576 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-13 19:47:20.916700 | orchestrator | Tuesday 13 May 2025 19:47:20 +0000 (0:00:00.612) 0:00:43.023 *********** 2025-05-13 19:47:21.254138 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:47:21.334963 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:47:21.411348 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:47:21.494488 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:47:21.578271 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:47:21.616345 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:47:21.617186 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:47:21.618061 | orchestrator | 2025-05-13 19:47:21.619233 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:47:21.619721 | orchestrator | 2025-05-13 19:47:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
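The netdev/network pairs written above give the seven hosts a full mesh of unicast VXLAN tunnels over the 192.168.16.0/20 underlay: every host gets an address in 192.168.128.0/20 on vxlan1 (VNI 23), while on vxlan0 (VNI 42) only the manager carries 192.168.112.5/20. A minimal sketch of such a pair for testbed-node-0's vxlan1, assuming plain systemd-networkd syntax (the role's real templates are not part of this log):

cat <<'EOF' > /etc/systemd/network/30-vxlan1.netdev
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10
# Create the device without binding it to an underlay .network
Independent=true
EOF

cat <<'EOF' > /etc/systemd/network/30-vxlan1.network
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20

# With no multicast group configured, BUM traffic is flooded through one
# all-zero FDB entry per remote VTEP, one [BridgeFDB] block per 'dests' entry.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.5
EOF

networkctl reload   # what the role's "Reload systemd-networkd" handler would trigger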
2025-05-13 19:47:21.619828 | orchestrator | 2025-05-13 19:47:21 | INFO  | Please wait and do not abort execution. 2025-05-13 19:47:21.622927 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 19:47:21.623290 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.624095 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.624657 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.626157 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.626188 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.626265 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-13 19:47:21.627317 | orchestrator | 2025-05-13 19:47:21.628445 | orchestrator | 2025-05-13 19:47:21.629858 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:47:21.630634 | orchestrator | Tuesday 13 May 2025 19:47:21 +0000 (0:00:00.705) 0:00:43.729 *********** 2025-05-13 19:47:21.631613 | orchestrator | =============================================================================== 2025-05-13 19:47:21.632221 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.98s 2025-05-13 19:47:21.632922 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.72s 2025-05-13 19:47:21.633681 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.69s 2025-05-13 19:47:21.634705 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.38s 2025-05-13 19:47:21.635221 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.04s 2025-05-13 19:47:21.636331 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-05-13 19:47:21.636976 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.87s 2025-05-13 19:47:21.638119 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.80s 2025-05-13 19:47:21.638343 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.69s 2025-05-13 19:47:21.639019 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2025-05-13 19:47:21.639917 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.62s 2025-05-13 19:47:21.640157 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.32s 2025-05-13 19:47:21.641056 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s 2025-05-13 19:47:21.641255 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.22s 2025-05-13 19:47:21.641875 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-05-13 19:47:21.642475 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2025-05-13 19:47:21.642976 | orchestrator | osism.commons.network : 
Check if path for interface file exists --------- 1.09s 2025-05-13 19:47:21.644175 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-05-13 19:47:21.644476 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2025-05-13 19:47:21.645323 | orchestrator | osism.commons.network : Create required directories --------------------- 0.93s 2025-05-13 19:47:22.261610 | orchestrator | + osism apply wireguard 2025-05-13 19:47:24.010901 | orchestrator | 2025-05-13 19:47:24 | INFO  | Task b6cbd368-13ff-47a0-9cad-d05c23899ec4 (wireguard) was prepared for execution. 2025-05-13 19:47:24.011002 | orchestrator | 2025-05-13 19:47:24 | INFO  | It takes a moment until task b6cbd368-13ff-47a0-9cad-d05c23899ec4 (wireguard) has been started and output is visible here. 2025-05-13 19:47:28.074548 | orchestrator | 2025-05-13 19:47:28.075047 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-13 19:47:28.075477 | orchestrator | 2025-05-13 19:47:28.076087 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-13 19:47:28.076797 | orchestrator | Tuesday 13 May 2025 19:47:28 +0000 (0:00:00.231) 0:00:00.231 *********** 2025-05-13 19:47:29.581774 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:29.581874 | orchestrator | 2025-05-13 19:47:29.581889 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-13 19:47:29.581903 | orchestrator | Tuesday 13 May 2025 19:47:29 +0000 (0:00:01.506) 0:00:01.738 *********** 2025-05-13 19:47:35.941103 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:35.941673 | orchestrator | 2025-05-13 19:47:35.943804 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-13 19:47:35.944620 | orchestrator | Tuesday 13 May 2025 19:47:35 +0000 (0:00:06.360) 0:00:08.099 *********** 2025-05-13 19:47:36.505955 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:36.506841 | orchestrator | 2025-05-13 19:47:36.507248 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-13 19:47:36.508069 | orchestrator | Tuesday 13 May 2025 19:47:36 +0000 (0:00:00.567) 0:00:08.666 *********** 2025-05-13 19:47:36.916806 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:36.917785 | orchestrator | 2025-05-13 19:47:36.919149 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-13 19:47:36.919836 | orchestrator | Tuesday 13 May 2025 19:47:36 +0000 (0:00:00.409) 0:00:09.076 *********** 2025-05-13 19:47:37.560473 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:37.563670 | orchestrator | 2025-05-13 19:47:37.564833 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-13 19:47:37.565529 | orchestrator | Tuesday 13 May 2025 19:47:37 +0000 (0:00:00.642) 0:00:09.718 *********** 2025-05-13 19:47:37.959925 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:37.960216 | orchestrator | 2025-05-13 19:47:37.960941 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-13 19:47:37.962854 | orchestrator | Tuesday 13 May 2025 19:47:37 +0000 (0:00:00.400) 0:00:10.119 *********** 2025-05-13 19:47:38.362169 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:38.364316 | orchestrator | 2025-05-13 
19:47:38.364358 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-13 19:47:38.364746 | orchestrator | Tuesday 13 May 2025 19:47:38 +0000 (0:00:00.400) 0:00:10.519 *********** 2025-05-13 19:47:39.659953 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:39.660215 | orchestrator | 2025-05-13 19:47:39.661087 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-13 19:47:39.661740 | orchestrator | Tuesday 13 May 2025 19:47:39 +0000 (0:00:01.298) 0:00:11.817 *********** 2025-05-13 19:47:40.607134 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-13 19:47:40.690722 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:40.690793 | orchestrator | 2025-05-13 19:47:40.690808 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-13 19:47:40.690822 | orchestrator | Tuesday 13 May 2025 19:47:40 +0000 (0:00:00.947) 0:00:12.765 *********** 2025-05-13 19:47:42.280144 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:42.281124 | orchestrator | 2025-05-13 19:47:42.284793 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-13 19:47:42.286350 | orchestrator | Tuesday 13 May 2025 19:47:42 +0000 (0:00:01.674) 0:00:14.439 *********** 2025-05-13 19:47:43.234996 | orchestrator | changed: [testbed-manager] 2025-05-13 19:47:43.235347 | orchestrator | 2025-05-13 19:47:43.236638 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:47:43.237158 | orchestrator | 2025-05-13 19:47:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:47:43.237277 | orchestrator | 2025-05-13 19:47:43 | INFO  | Please wait and do not abort execution. 
2025-05-13 19:47:43.238140 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:47:43.238706 | orchestrator | 2025-05-13 19:47:43.239641 | orchestrator | 2025-05-13 19:47:43.240280 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:47:43.241124 | orchestrator | Tuesday 13 May 2025 19:47:43 +0000 (0:00:00.954) 0:00:15.393 *********** 2025-05-13 19:47:43.242097 | orchestrator | =============================================================================== 2025-05-13 19:47:43.242680 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.36s 2025-05-13 19:47:43.243819 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s 2025-05-13 19:47:43.244795 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.51s 2025-05-13 19:47:43.245564 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.30s 2025-05-13 19:47:43.246292 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2025-05-13 19:47:43.247061 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2025-05-13 19:47:43.247804 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.64s 2025-05-13 19:47:43.248672 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-05-13 19:47:43.249618 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-05-13 19:47:43.250182 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s 2025-05-13 19:47:43.250713 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-13 19:47:43.819564 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-13 19:47:43.851921 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-13 19:47:43.852068 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-13 19:47:43.937638 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 176 0 --:--:-- --:--:-- --:--:-- 176 2025-05-13 19:47:43.952700 | orchestrator | + osism apply --environment custom workarounds 2025-05-13 19:47:45.655622 | orchestrator | 2025-05-13 19:47:45 | INFO  | Trying to run play workarounds in environment custom 2025-05-13 19:47:45.714485 | orchestrator | 2025-05-13 19:47:45 | INFO  | Task c29c4e8f-352f-4ddb-8438-c86df67714e3 (workarounds) was prepared for execution. 2025-05-13 19:47:45.714623 | orchestrator | 2025-05-13 19:47:45 | INFO  | It takes a moment until task c29c4e8f-352f-4ddb-8438-c86df67714e3 (workarounds) has been started and output is visible here. 
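The wireguard play above generated the server key pair and preshared key and rendered /etc/wireguard/wg0.conf on the manager before enabling wg-quick@wg0. In outline, with placeholder keys and addresses since the role's template and values are not in this log:

umask 077
wg genkey | tee server.key | wg pubkey > server.pub   # "Create public and private key - server"
wg genpsk > preshared.key                             # "Create preshared key"

cat <<'EOF' > /etc/wireguard/wg0.conf
[Interface]
# Placeholder address and port; the real values come from the role's defaults.
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <client public key>
PresharedKey = <contents of preshared.key>
AllowedIPs = 192.168.0.2/32
EOF

systemctl enable --now wg-quick@wg0   # "Manage wg-quick@wg0.service service"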
2025-05-13 19:47:49.658335 | orchestrator | 2025-05-13 19:47:49.661540 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 19:47:49.661719 | orchestrator | 2025-05-13 19:47:49.661736 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-13 19:47:49.661745 | orchestrator | Tuesday 13 May 2025 19:47:49 +0000 (0:00:00.148) 0:00:00.148 *********** 2025-05-13 19:47:49.854650 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-13 19:47:49.936519 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-13 19:47:50.018165 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-13 19:47:50.103486 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-13 19:47:50.306142 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-13 19:47:50.477091 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-13 19:47:50.477380 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-13 19:47:50.478156 | orchestrator | 2025-05-13 19:47:50.479177 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-13 19:47:50.480199 | orchestrator | 2025-05-13 19:47:50.480220 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-13 19:47:50.480555 | orchestrator | Tuesday 13 May 2025 19:47:50 +0000 (0:00:00.821) 0:00:00.970 *********** 2025-05-13 19:47:53.185941 | orchestrator | ok: [testbed-manager] 2025-05-13 19:47:53.186148 | orchestrator | 2025-05-13 19:47:53.187051 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-13 19:47:53.188486 | orchestrator | 2025-05-13 19:47:53.189075 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-13 19:47:53.193297 | orchestrator | Tuesday 13 May 2025 19:47:53 +0000 (0:00:02.702) 0:00:03.672 *********** 2025-05-13 19:47:55.095277 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:47:55.095409 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:47:55.096019 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:47:55.099364 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:47:55.099390 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:47:55.099402 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:47:55.099448 | orchestrator | 2025-05-13 19:47:55.099464 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-13 19:47:55.099522 | orchestrator | 2025-05-13 19:47:55.100105 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-13 19:47:55.101148 | orchestrator | Tuesday 13 May 2025 19:47:55 +0000 (0:00:01.911) 0:00:05.584 *********** 2025-05-13 19:47:56.602833 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.603281 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.605729 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.606935 | orchestrator | changed: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.607498 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.608161 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-13 19:47:56.608998 | orchestrator | 2025-05-13 19:47:56.609664 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-13 19:47:56.610288 | orchestrator | Tuesday 13 May 2025 19:47:56 +0000 (0:00:01.507) 0:00:07.091 *********** 2025-05-13 19:48:00.434613 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:48:00.436996 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:48:00.438346 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:48:00.439071 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:48:00.440050 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:48:00.440130 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:48:00.440814 | orchestrator | 2025-05-13 19:48:00.441413 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-13 19:48:00.442100 | orchestrator | Tuesday 13 May 2025 19:48:00 +0000 (0:00:03.835) 0:00:10.926 *********** 2025-05-13 19:48:00.636456 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:48:00.714902 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:48:00.792926 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:48:00.871203 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:48:01.210204 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:48:01.210305 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:48:01.210322 | orchestrator | 2025-05-13 19:48:01.210759 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-13 19:48:01.211629 | orchestrator | 2025-05-13 19:48:01.214142 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-13 19:48:01.214407 | orchestrator | Tuesday 13 May 2025 19:48:01 +0000 (0:00:00.771) 0:00:11.698 *********** 2025-05-13 19:48:02.926794 | orchestrator | changed: [testbed-manager] 2025-05-13 19:48:02.928460 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:48:02.929699 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:48:02.931785 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:48:02.933379 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:48:02.934403 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:48:02.935387 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:48:02.936217 | orchestrator | 2025-05-13 19:48:02.937313 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-13 19:48:02.938467 | orchestrator | Tuesday 13 May 2025 19:48:02 +0000 (0:00:01.719) 0:00:13.417 *********** 2025-05-13 19:48:04.523271 | orchestrator | changed: [testbed-manager] 2025-05-13 19:48:04.523823 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:48:04.524220 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:48:04.524454 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:48:04.526988 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:48:04.528633 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:48:04.529473 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:48:04.533924 | orchestrator | 
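Together with the reload and enable tasks that follow, the two copy tasks above wire a boot-time workaround service into systemd. A plausible minimal shape for such a oneshot unit; the script path is an assumption, since the unit file's content is not shown in the log:

cat <<'EOF' > /etc/systemd/system/workarounds.service
[Unit]
Description=Apply testbed workarounds at boot
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload                # "Reload systemd daemon"
systemctl enable workarounds.service   # "Enable workarounds.service (Debian)"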
2025-05-13 19:48:04.535087 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-13 19:48:04.541874 | orchestrator | Tuesday 13 May 2025 19:48:04 +0000 (0:00:01.589) 0:00:15.007 ***********
2025-05-13 19:48:06.039132 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:06.039250 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:06.039267 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:06.040532 | orchestrator | ok: [testbed-manager]
2025-05-13 19:48:06.041842 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:06.044979 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:06.045684 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:06.046673 | orchestrator |
2025-05-13 19:48:06.047471 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-13 19:48:06.048727 | orchestrator | Tuesday 13 May 2025 19:48:06 +0000 (0:00:01.521) 0:00:16.528 ***********
2025-05-13 19:48:07.837931 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:48:07.838378 | orchestrator | changed: [testbed-manager]
2025-05-13 19:48:07.839907 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:48:07.841479 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:48:07.842103 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:48:07.844196 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:48:07.844722 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:48:07.846288 | orchestrator |
2025-05-13 19:48:07.846812 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-05-13 19:48:07.847104 | orchestrator | Tuesday 13 May 2025 19:48:07 +0000 (0:00:01.796) 0:00:18.324 ***********
2025-05-13 19:48:08.011531 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:48:08.088480 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:48:08.167863 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:48:08.254961 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:48:08.324051 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:48:08.461123 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:48:08.462350 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:48:08.463551 | orchestrator |
2025-05-13 19:48:08.464784 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-05-13 19:48:08.465451 | orchestrator |
2025-05-13 19:48:08.466836 | orchestrator | TASK [Install python3-docker] **************************************************
2025-05-13 19:48:08.467200 | orchestrator | Tuesday 13 May 2025 19:48:08 +0000 (0:00:00.626) 0:00:18.951 ***********
2025-05-13 19:48:11.147535 | orchestrator | ok: [testbed-manager]
2025-05-13 19:48:11.147661 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:11.147678 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:11.148859 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:11.149219 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:11.149720 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:11.150405 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:11.151003 | orchestrator |
2025-05-13 19:48:11.154815 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:48:11.159786 | orchestrator | 2025-05-13 19:48:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:48:11.159828 | orchestrator | 2025-05-13 19:48:11 | INFO  | Please wait and do not abort execution. 2025-05-13 19:48:11.160268 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-13 19:48:11.160995 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.161592 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.162122 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.162622 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.163240 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.163590 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:48:11.164122 | orchestrator | 2025-05-13 19:48:11.164565 | orchestrator | 2025-05-13 19:48:11.165002 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:48:11.165704 | orchestrator | Tuesday 13 May 2025 19:48:11 +0000 (0:00:02.683) 0:00:21.635 *********** 2025-05-13 19:48:11.167692 | orchestrator | =============================================================================== 2025-05-13 19:48:11.167738 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s 2025-05-13 19:48:11.167751 | orchestrator | Apply netplan configuration --------------------------------------------- 2.70s 2025-05-13 19:48:11.167763 | orchestrator | Install python3-docker -------------------------------------------------- 2.68s 2025-05-13 19:48:11.167774 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s 2025-05-13 19:48:11.168095 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s 2025-05-13 19:48:11.168430 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-05-13 19:48:11.168885 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2025-05-13 19:48:11.169236 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s 2025-05-13 19:48:11.169612 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2025-05-13 19:48:11.170010 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s 2025-05-13 19:48:11.170497 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2025-05-13 19:48:11.170702 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-05-13 19:48:11.803426 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-13 19:48:13.550975 | orchestrator | 2025-05-13 19:48:13 | INFO  | Task c78103dd-7f10-4ece-b6c0-3662fc4f8a39 (reboot) was prepared for execution. 2025-05-13 19:48:13.551071 | orchestrator | 2025-05-13 19:48:13 | INFO  | It takes a moment until task c78103dd-7f10-4ece-b6c0-3662fc4f8a39 (reboot) has been started and output is visible here. 
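The reboot plays that follow are guarded by the `ireallymeanit=yes` extra variable passed above: without it, the "Exit playbook" task aborts the run; with it, each node is rebooted without waiting for it to come back. A rough shell equivalent of the non-blocking branch (host names taken from the log, everything else an assumption):

    for node in testbed-node-{0..5}; do
      # The SSH session drops as soon as the reboot starts, hence "|| true".
      ssh "$node" 'sudo systemctl reboot' || true
    done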
2025-05-13 19:48:17.666652 | orchestrator |
2025-05-13 19:48:17.668143 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:17.668450 | orchestrator |
2025-05-13 19:48:17.669504 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:17.671089 | orchestrator | Tuesday 13 May 2025 19:48:17 +0000 (0:00:00.209) 0:00:00.209 ***********
2025-05-13 19:48:17.773626 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:48:17.773798 | orchestrator |
2025-05-13 19:48:17.774902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:17.775527 | orchestrator | Tuesday 13 May 2025 19:48:17 +0000 (0:00:00.111) 0:00:00.321 ***********
2025-05-13 19:48:18.710087 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:48:18.710249 | orchestrator |
2025-05-13 19:48:18.710336 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:18.711693 | orchestrator | Tuesday 13 May 2025 19:48:18 +0000 (0:00:00.933) 0:00:01.254 ***********
2025-05-13 19:48:18.827771 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:48:18.827893 | orchestrator |
2025-05-13 19:48:18.828076 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:18.828140 | orchestrator |
2025-05-13 19:48:18.828801 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:18.829110 | orchestrator | Tuesday 13 May 2025 19:48:18 +0000 (0:00:00.118) 0:00:01.373 ***********
2025-05-13 19:48:18.935207 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:48:18.935376 | orchestrator |
2025-05-13 19:48:18.936959 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:18.937733 | orchestrator | Tuesday 13 May 2025 19:48:18 +0000 (0:00:00.109) 0:00:01.482 ***********
2025-05-13 19:48:19.583058 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:48:19.583398 | orchestrator |
2025-05-13 19:48:19.584759 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:19.584911 | orchestrator | Tuesday 13 May 2025 19:48:19 +0000 (0:00:00.648) 0:00:02.131 ***********
2025-05-13 19:48:19.687229 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:48:19.688798 | orchestrator |
2025-05-13 19:48:19.689828 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:19.691807 | orchestrator |
2025-05-13 19:48:19.692368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:19.692911 | orchestrator | Tuesday 13 May 2025 19:48:19 +0000 (0:00:00.101) 0:00:02.232 ***********
2025-05-13 19:48:19.891072 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:48:19.892605 | orchestrator |
2025-05-13 19:48:19.893554 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:19.894498 | orchestrator | Tuesday 13 May 2025 19:48:19 +0000 (0:00:00.205) 0:00:02.438 ***********
2025-05-13 19:48:20.601079 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:48:20.601180 | orchestrator |
2025-05-13 19:48:20.601372 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:20.601991 | orchestrator | Tuesday 13 May 2025 19:48:20 +0000 (0:00:00.710) 0:00:03.149 ***********
2025-05-13 19:48:20.729774 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:48:20.730815 | orchestrator |
2025-05-13 19:48:20.731065 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:20.732141 | orchestrator |
2025-05-13 19:48:20.732451 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:20.733378 | orchestrator | Tuesday 13 May 2025 19:48:20 +0000 (0:00:00.128) 0:00:03.277 ***********
2025-05-13 19:48:20.831070 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:48:20.831160 | orchestrator |
2025-05-13 19:48:20.831791 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:20.832256 | orchestrator | Tuesday 13 May 2025 19:48:20 +0000 (0:00:00.102) 0:00:03.379 ***********
2025-05-13 19:48:21.492985 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:48:21.493829 | orchestrator |
2025-05-13 19:48:21.494711 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:21.496488 | orchestrator | Tuesday 13 May 2025 19:48:21 +0000 (0:00:00.662) 0:00:04.041 ***********
2025-05-13 19:48:21.617761 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:48:21.617859 | orchestrator |
2025-05-13 19:48:21.618353 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:21.618953 | orchestrator |
2025-05-13 19:48:21.625263 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:21.625339 | orchestrator | Tuesday 13 May 2025 19:48:21 +0000 (0:00:00.124) 0:00:04.165 ***********
2025-05-13 19:48:21.724876 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:48:21.726113 | orchestrator |
2025-05-13 19:48:21.727920 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:21.728426 | orchestrator | Tuesday 13 May 2025 19:48:21 +0000 (0:00:00.107) 0:00:04.273 ***********
2025-05-13 19:48:22.390110 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:48:22.391196 | orchestrator |
2025-05-13 19:48:22.392075 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:22.393069 | orchestrator | Tuesday 13 May 2025 19:48:22 +0000 (0:00:00.665) 0:00:04.938 ***********
2025-05-13 19:48:22.503665 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:48:22.504819 | orchestrator |
2025-05-13 19:48:22.505701 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-13 19:48:22.506830 | orchestrator |
2025-05-13 19:48:22.507667 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-13 19:48:22.508564 | orchestrator | Tuesday 13 May 2025 19:48:22 +0000 (0:00:00.113) 0:00:05.051 ***********
2025-05-13 19:48:22.611893 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:48:22.613122 | orchestrator |
2025-05-13 19:48:22.614152 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-13 19:48:22.614878 | orchestrator | Tuesday 13 May 2025 19:48:22 +0000 (0:00:00.106) 0:00:05.158 ***********
2025-05-13 19:48:23.285601 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:48:23.286202 | orchestrator |
2025-05-13 19:48:23.287394 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-13 19:48:23.288018 | orchestrator | Tuesday 13 May 2025 19:48:23 +0000 (0:00:00.673) 0:00:05.832 ***********
2025-05-13 19:48:23.321070 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:48:23.321798 | orchestrator |
2025-05-13 19:48:23.323096 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:48:23.323280 | orchestrator | 2025-05-13 19:48:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:48:23.323516 | orchestrator | 2025-05-13 19:48:23 | INFO  | Please wait and do not abort execution.
2025-05-13 19:48:23.324137 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.325043 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.325766 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.326399 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.327014 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.327818 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:48:23.328195 | orchestrator |
2025-05-13 19:48:23.328621 | orchestrator |
2025-05-13 19:48:23.328954 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:48:23.329464 | orchestrator | Tuesday 13 May 2025 19:48:23 +0000 (0:00:00.038) 0:00:05.870 ***********
2025-05-13 19:48:23.329861 | orchestrator | ===============================================================================
2025-05-13 19:48:23.330258 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.29s
2025-05-13 19:48:23.331312 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2025-05-13 19:48:23.331341 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s
2025-05-13 19:48:23.973785 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-13 19:48:25.769251 | orchestrator | 2025-05-13 19:48:25 | INFO  | Task 3e4f78b0-a544-4a02-a27e-a792fe065a16 (wait-for-connection) was prepared for execution.
2025-05-13 19:48:25.769326 | orchestrator | 2025-05-13 19:48:25 | INFO  | It takes a moment until task 3e4f78b0-a544-4a02-a27e-a792fe065a16 (wait-for-connection) has been started and output is visible here.
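The wait-for-connection play that follows is the counterpart to the non-blocking reboot: it polls every node until SSH answers again before the deployment continues. A minimal sketch of the same polling loop in shell (the 5-second interval and lack of an overall timeout are assumptions):

    for node in testbed-node-{0..5}; do
      until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; do
        sleep 5   # keep retrying until sshd on the rebooted node answers again
      done
    done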
2025-05-13 19:48:29.848816 | orchestrator |
2025-05-13 19:48:29.849247 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-13 19:48:29.850436 | orchestrator |
2025-05-13 19:48:29.850559 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-13 19:48:29.851340 | orchestrator | Tuesday 13 May 2025 19:48:29 +0000 (0:00:00.229) 0:00:00.229 ***********
2025-05-13 19:48:41.575264 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:41.575376 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:41.575433 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:41.575513 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:41.576098 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:41.576983 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:41.578421 | orchestrator |
2025-05-13 19:48:41.579222 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:48:41.579739 | orchestrator | 2025-05-13 19:48:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:48:41.579814 | orchestrator | 2025-05-13 19:48:41 | INFO  | Please wait and do not abort execution.
2025-05-13 19:48:41.580814 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.581399 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.582001 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.582775 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.583693 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.584362 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:48:41.585380 | orchestrator |
2025-05-13 19:48:41.585838 | orchestrator |
2025-05-13 19:48:41.586686 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:48:41.587421 | orchestrator | Tuesday 13 May 2025 19:48:41 +0000 (0:00:11.725) 0:00:11.954 ***********
2025-05-13 19:48:41.588273 | orchestrator | ===============================================================================
2025-05-13 19:48:41.588836 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.73s
2025-05-13 19:48:42.208749 | orchestrator | + osism apply hddtemp
2025-05-13 19:48:43.886622 | orchestrator | 2025-05-13 19:48:43 | INFO  | Task d129607d-0982-438a-b343-b5d2589e9f74 (hddtemp) was prepared for execution.
2025-05-13 19:48:43.886769 | orchestrator | 2025-05-13 19:48:43 | INFO  | It takes a moment until task d129607d-0982-438a-b343-b5d2589e9f74 (hddtemp) has been started and output is visible here.
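The hddtemp play that follows removes the obsolete hddtemp package and relies on the in-kernel drivetemp hwmon driver plus lm-sensors instead. A manual sketch of the same steps on one Debian-family node (package names as in the task output):

    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # load the module on boot
    sudo modprobe drivetemp                                        # load it now
    sudo apt-get install -y lm-sensors
    sensors   # drive temperatures then appear as drivetemp-* hwmon devices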
2025-05-13 19:48:47.881133 | orchestrator |
2025-05-13 19:48:47.881504 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-13 19:48:47.884727 | orchestrator |
2025-05-13 19:48:47.885238 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-13 19:48:47.885588 | orchestrator | Tuesday 13 May 2025 19:48:47 +0000 (0:00:00.199) 0:00:00.199 ***********
2025-05-13 19:48:47.992406 | orchestrator | ok: [testbed-manager]
2025-05-13 19:48:48.047529 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:48.103471 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:48.157875 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:48.277045 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:48.387774 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:48.389434 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:48.390524 | orchestrator |
2025-05-13 19:48:48.392385 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-13 19:48:48.392877 | orchestrator | Tuesday 13 May 2025 19:48:48 +0000 (0:00:00.505) 0:00:00.704 ***********
2025-05-13 19:48:49.396643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:48:49.397185 | orchestrator |
2025-05-13 19:48:49.398206 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-13 19:48:49.399194 | orchestrator | Tuesday 13 May 2025 19:48:49 +0000 (0:00:01.007) 0:00:01.712 ***********
2025-05-13 19:48:51.248993 | orchestrator | ok: [testbed-manager]
2025-05-13 19:48:51.249440 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:51.250646 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:51.252349 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:51.254664 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:51.255700 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:51.256326 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:51.257078 | orchestrator |
2025-05-13 19:48:51.258058 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-13 19:48:51.258729 | orchestrator | Tuesday 13 May 2025 19:48:51 +0000 (0:00:01.853) 0:00:03.566 ***********
2025-05-13 19:48:51.785376 | orchestrator | changed: [testbed-manager]
2025-05-13 19:48:51.867966 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:48:51.949664 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:48:52.409638 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:48:52.411096 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:48:52.411147 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:48:52.411160 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:48:52.411508 | orchestrator |
2025-05-13 19:48:52.412445 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-13 19:48:52.412628 | orchestrator | Tuesday 13 May 2025 19:48:52 +0000 (0:00:01.155) 0:00:04.721 ***********
2025-05-13 19:48:53.493282 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:48:53.493446 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:48:53.493677 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:48:53.494246 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:48:53.495152 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:48:53.495312 | orchestrator | ok: [testbed-manager]
2025-05-13 19:48:53.495584 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:48:53.497273 | orchestrator |
2025-05-13 19:48:53.497296 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-13 19:48:53.497309 | orchestrator | Tuesday 13 May 2025 19:48:53 +0000 (0:00:01.085) 0:00:05.807 ***********
2025-05-13 19:48:53.935967 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:48:54.016983 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:48:54.101114 | orchestrator | changed: [testbed-manager]
2025-05-13 19:48:54.184479 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:48:54.319771 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:48:54.320714 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:48:54.321128 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:48:54.324552 | orchestrator |
2025-05-13 19:48:54.324611 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-13 19:48:54.324624 | orchestrator | Tuesday 13 May 2025 19:48:54 +0000 (0:00:00.830) 0:00:06.638 ***********
2025-05-13 19:49:06.553337 | orchestrator | changed: [testbed-manager]
2025-05-13 19:49:06.554492 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:49:06.554964 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:49:06.556873 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:49:06.557394 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:49:06.558395 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:49:06.559359 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:49:06.560095 | orchestrator |
2025-05-13 19:49:06.561243 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-13 19:49:06.562744 | orchestrator | Tuesday 13 May 2025 19:49:06 +0000 (0:00:12.227) 0:00:18.865 ***********
2025-05-13 19:49:07.774159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:49:07.775794 | orchestrator |
2025-05-13 19:49:07.775830 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-13 19:49:07.778927 | orchestrator | Tuesday 13 May 2025 19:49:07 +0000 (0:00:01.223) 0:00:20.089 ***********
2025-05-13 19:49:09.596248 | orchestrator | changed: [testbed-manager]
2025-05-13 19:49:09.596366 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:49:09.597094 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:49:09.598011 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:49:09.598839 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:49:09.600887 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:49:09.603043 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:49:09.604514 | orchestrator |
2025-05-13 19:49:09.607494 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:49:09.608085 | orchestrator | 2025-05-13 19:49:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:49:09.608282 | orchestrator | 2025-05-13 19:49:09 | INFO  | Please wait and do not abort execution.
2025-05-13 19:49:09.610114 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:49:09.610844 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.612101 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.612402 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.613197 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.615466 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.615492 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:49:09.615764 | orchestrator |
2025-05-13 19:49:09.616753 | orchestrator |
2025-05-13 19:49:09.617063 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:49:09.617751 | orchestrator | Tuesday 13 May 2025 19:49:09 +0000 (0:00:01.825) 0:00:21.914 ***********
2025-05-13 19:49:09.618743 | orchestrator | ===============================================================================
2025-05-13 19:49:09.618826 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.23s
2025-05-13 19:49:09.619513 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.85s
2025-05-13 19:49:09.620091 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s
2025-05-13 19:49:09.620504 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.22s
2025-05-13 19:49:09.621862 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s
2025-05-13 19:49:09.622248 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s
2025-05-13 19:49:09.622678 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s
2025-05-13 19:49:09.623070 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s
2025-05-13 19:49:09.623601 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.51s
2025-05-13 19:49:10.201752 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-13 19:49:36.473086 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-13 19:49:36.473233 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-13 19:49:36.473263 | orchestrator | + local max_attempts=60
2025-05-13 19:49:36.473285 | orchestrator | + local name=ceph-ansible
2025-05-13 19:49:36.473306 | orchestrator | + local attempt_num=1
2025-05-13 19:49:36.473325 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-13 19:49:36.501695 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:49:36.501761 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-13 19:49:36.501797 | orchestrator | + local max_attempts=60
2025-05-13 19:49:36.501810 | orchestrator | + local name=kolla-ansible
2025-05-13 19:49:36.501822 | orchestrator | + local attempt_num=1
2025-05-13 19:49:36.501903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-13 19:49:36.534973 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:49:36.535090 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-13 19:49:36.535109 | orchestrator | + local max_attempts=60
2025-05-13 19:49:36.535121 | orchestrator | + local name=osism-ansible
2025-05-13 19:49:36.535132 | orchestrator | + local attempt_num=1
2025-05-13 19:49:36.535218 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-13 19:49:36.570441 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-13 19:49:36.570539 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-13 19:49:36.570609 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-13 19:49:36.743421 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-13 19:49:36.956945 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-13 19:49:37.185479 | orchestrator | ARA in osism-ansible already disabled.
2025-05-13 19:49:37.386400 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-13 19:49:37.392410 | orchestrator | + osism apply gather-facts
2025-05-13 19:49:39.116490 | orchestrator | 2025-05-13 19:49:39 | INFO  | Task e555495a-050a-49ee-a2d0-a4252abac641 (gather-facts) was prepared for execution.
2025-05-13 19:49:39.116788 | orchestrator | 2025-05-13 19:49:39 | INFO  | It takes a moment until task e555495a-050a-49ee-a2d0-a4252abac641 (gather-facts) has been started and output is visible here.
2025-05-13 19:49:55.349193 | orchestrator |
2025-05-13 19:49:55.349298 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-13 19:49:55.349310 | orchestrator |
2025-05-13 19:49:55.349318 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 19:49:55.349326 | orchestrator | Tuesday 13 May 2025 19:49:55 +0000 (0:00:01.304) 0:00:01.304 ***********
2025-05-13 19:50:02.959829 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:50:02.959975 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:50:02.960739 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:50:02.964167 | orchestrator | ok: [testbed-manager]
2025-05-13 19:50:02.964477 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:50:02.965256 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:50:02.965798 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:50:02.966486 | orchestrator |
2025-05-13 19:50:02.966744 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-13 19:50:02.967380 | orchestrator |
2025-05-13 19:50:02.967806 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-13 19:50:02.968270 | orchestrator | Tuesday 13 May 2025 19:50:02 +0000 (0:00:07.616) 0:00:08.921 ***********
2025-05-13 19:50:03.136902 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:50:03.232626 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:50:03.323861 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:50:03.432635 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:50:03.513842 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:50:05.813788 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:50:05.816699 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:50:05.818230 | orchestrator |
2025-05-13 19:50:05.819354 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:50:05.819664 | orchestrator | 2025-05-13 19:50:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:50:05.819772 | orchestrator | 2025-05-13 19:50:05 | INFO  | Please wait and do not abort execution.
2025-05-13 19:50:05.821159 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.821769 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.822466 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.822893 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.823523 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.823955 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.824815 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-13 19:50:05.825132 | orchestrator |
2025-05-13 19:50:05.825720 | orchestrator |
2025-05-13 19:50:05.826256 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:50:05.827430 | orchestrator | Tuesday 13 May 2025 19:50:05 +0000 (0:00:02.858) 0:00:11.780 ***********
2025-05-13 19:50:05.828359 | orchestrator | ===============================================================================
2025-05-13 19:50:05.830361 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.62s
2025-05-13 19:50:05.830409 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.86s
2025-05-13 19:50:06.506428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-13 19:50:06.521258 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-13 19:50:06.543543 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-13 19:50:06.561552 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-13 19:50:06.582063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-13 19:50:06.605717 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-13 19:50:06.618094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-13 19:50:06.629998 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-13 19:50:06.650204 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-13 19:50:06.664004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-13 19:50:06.676300 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-13 19:50:06.687831 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-13 19:50:06.699106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-13 19:50:06.711101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-13 19:50:06.722327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-13 19:50:06.733171 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-13 19:50:06.744102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-13 19:50:06.754991 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-13 19:50:06.767910 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-13 19:50:06.784756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-13 19:50:06.797014 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-13 19:50:07.092964 | orchestrator | ok: Runtime: 0:29:28.368370
2025-05-13 19:50:07.202544 |
2025-05-13 19:50:07.202678 | TASK [Deploy services]
2025-05-13 19:50:07.735172 | orchestrator | skipping: Conditional result was False
2025-05-13 19:50:07.756018 |
2025-05-13 19:50:07.756205 | TASK [Deploy in a nutshell]
2025-05-13 19:50:08.413058 | orchestrator |
2025-05-13 19:50:08.413234 | orchestrator | # PULL IMAGES
2025-05-13 19:50:08.413254 | orchestrator |
2025-05-13 19:50:08.413306 | orchestrator | + set -e
2025-05-13 19:50:08.413325 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-13 19:50:08.413346 | orchestrator | ++ export INTERACTIVE=false
2025-05-13 19:50:08.413361 | orchestrator | ++ INTERACTIVE=false
2025-05-13 19:50:08.413435 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-13 19:50:08.413457 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-13 19:50:08.413471 | orchestrator | + source /opt/manager-vars.sh
2025-05-13 19:50:08.413482 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-13 19:50:08.413501 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-13 19:50:08.413513 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-13 19:50:08.413530 | orchestrator | ++ CEPH_VERSION=reef
2025-05-13 19:50:08.413541 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-13 19:50:08.413561 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-13 19:50:08.413573 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-13 19:50:08.413589 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-13 19:50:08.413602 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-13 19:50:08.413620 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-13 19:50:08.413632 | orchestrator | ++ export ARA=false
2025-05-13 19:50:08.413645 | orchestrator | ++ ARA=false
2025-05-13 19:50:08.413657 | orchestrator | ++ export TEMPEST=false
2025-05-13 19:50:08.413670 | orchestrator | ++ TEMPEST=false
2025-05-13 19:50:08.413683 | orchestrator | ++ export IS_ZUUL=true
2025-05-13 19:50:08.413695 | orchestrator | ++ IS_ZUUL=true
2025-05-13 19:50:08.413707 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-05-13 19:50:08.413721 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-05-13 19:50:08.413733 | orchestrator | ++ export EXTERNAL_API=false
2025-05-13 19:50:08.413746 | orchestrator | ++ EXTERNAL_API=false
2025-05-13 19:50:08.413758 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-13 19:50:08.413770 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-13 19:50:08.413782 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-13 19:50:08.413795 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-13 19:50:08.413808 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-13 19:50:08.413820 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-13 19:50:08.413833 | orchestrator | + echo
2025-05-13 19:50:08.413845 | orchestrator | + echo '# PULL IMAGES'
2025-05-13 19:50:08.413858 | orchestrator | + echo
2025-05-13 19:50:08.413983 | orchestrator | ++ semver latest 7.0.0
2025-05-13 19:50:08.469228 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-13 19:50:08.469306 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-13 19:50:08.469319 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-13 19:50:10.101814 | orchestrator | 2025-05-13 19:50:10 | INFO  | Trying to run play pull-images in environment custom
2025-05-13 19:50:10.162566 | orchestrator | 2025-05-13 19:50:10 | INFO  | Task bf88b734-67e7-4ddc-917b-246973fa92d9 (pull-images) was prepared for execution.
2025-05-13 19:50:10.162713 | orchestrator | 2025-05-13 19:50:10 | INFO  | It takes a moment until task bf88b734-67e7-4ddc-917b-246973fa92d9 (pull-images) has been started and output is visible here.
2025-05-13 19:50:12.984879 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2025-05-13 19:50:12.986187 | orchestrator | -vvvv to see details
2025-05-13 19:50:15.200682 | orchestrator |
2025-05-13 19:50:15.202236 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-13 19:50:15.204052 | orchestrator |
2025-05-13 19:50:15.205303 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-13 19:50:17.590704 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-13 19:50:17.591603 | orchestrator |
2025-05-13 19:50:17.593409 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:50:17.593466 | orchestrator | 2025-05-13 19:50:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:50:17.594485 | orchestrator | 2025-05-13 19:50:17 | INFO  | Please wait and do not abort execution.
2025-05-13 19:50:17.596398 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:50:17.597645 | orchestrator |
2025-05-13 19:50:17.735493 | orchestrator | 2025-05-13 19:50:17 | INFO  | Trying to run play pull-images in environment custom
2025-05-13 19:50:17.736048 | orchestrator | 2025-05-13 19:50:17 | INFO  | Task b7eab932-6733-4830-aa02-43ac20e18e29 (pull-images) was prepared for execution.
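The `+` lines above trace a `wait_for_container_healthy` helper that gates the deployment on the health status of the ceph-ansible, kolla-ansible, and osism-ansible containers after the manager restart. Reconstructed from the trace, it presumably looks roughly like this; the retry interval and failure handling are assumptions, since all three containers were already healthy here and the loop body never ran:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll the Docker health status until the container reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container ${name} did not become healthy" >&2
                return 1
            fi
            (( attempt_num++ ))
            sleep 5   # assumed retry interval
        done
    }

Note also that the first pull-images attempt above failed with UNREACHABLE (the Ansible worker could not find its SSH identity right after the manager restart); `osism apply -r 2` retried the play, and the second run below succeeds.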
2025-05-13 19:50:17.736088 | orchestrator | 2025-05-13 19:50:17 | INFO  | It takes a moment until task b7eab932-6733-4830-aa02-43ac20e18e29 (pull-images) has been started and output is visible here.
2025-05-13 19:50:22.746355 | orchestrator |
2025-05-13 19:50:22.748729 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-13 19:50:22.748844 | orchestrator |
2025-05-13 19:50:22.750573 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-13 19:50:22.751671 | orchestrator | Tuesday 13 May 2025 19:50:22 +0000 (0:00:01.175) 0:00:01.175 ***********
2025-05-13 19:51:28.352750 | orchestrator | changed: [testbed-manager]
2025-05-13 19:51:28.352864 | orchestrator |
2025-05-13 19:51:28.352879 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-13 19:51:28.352894 | orchestrator | Tuesday 13 May 2025 19:51:28 +0000 (0:01:05.603) 0:01:06.779 ***********
2025-05-13 19:52:25.464156 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-13 19:52:25.464300 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-13 19:52:25.464322 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-13 19:52:25.464342 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-13 19:52:25.464361 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-13 19:52:25.464381 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-13 19:52:25.464393 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-13 19:52:25.464407 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-13 19:52:25.464418 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-13 19:52:25.464429 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-13 19:52:25.464694 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-13 19:52:25.465590 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-13 19:52:25.469327 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-13 19:52:25.469407 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-13 19:52:25.469422 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-13 19:52:25.469433 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-13 19:52:25.470288 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-13 19:52:25.470914 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-13 19:52:25.470937 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-13 19:52:25.471735 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-13 19:52:25.472894 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-13 19:52:25.473008 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-13 19:52:25.473363 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-13 19:52:25.474186 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-13 19:52:25.475977 | orchestrator |
2025-05-13 19:52:25.476010 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:52:25.476066 | orchestrator | 2025-05-13 19:52:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:52:25.476080 | orchestrator | 2025-05-13 19:52:25 | INFO  | Please wait and do not abort execution.
2025-05-13 19:52:25.476923 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:52:25.476951 | orchestrator |
2025-05-13 19:52:25.477055 | orchestrator |
2025-05-13 19:52:25.477355 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:52:25.477646 | orchestrator | Tuesday 13 May 2025 19:52:25 +0000 (0:00:57.112) 0:02:03.891 ***********
2025-05-13 19:52:25.478584 | orchestrator | ===============================================================================
2025-05-13 19:52:25.479581 | orchestrator | Pull keystone image ---------------------------------------------------- 65.60s
2025-05-13 19:52:25.479875 | orchestrator | Pull other images ------------------------------------------------------ 57.11s
2025-05-13 19:52:27.789210 | orchestrator | 2025-05-13 19:52:27 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-13 19:52:27.850460 | orchestrator | 2025-05-13 19:52:27 | INFO  | Task ab46a97f-15dc-4ede-8f7e-40862bfa5002 (wipe-partitions) was prepared for execution.
2025-05-13 19:52:27.850551 | orchestrator | 2025-05-13 19:52:27 | INFO  | It takes a moment until task ab46a97f-15dc-4ede-8f7e-40862bfa5002 (wipe-partitions) has been started and output is visible here.
2025-05-13 19:52:33.860115 | orchestrator |
2025-05-13 19:52:33.860853 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-13 19:52:33.861067 | orchestrator |
2025-05-13 19:52:33.861292 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-13 19:52:33.861602 | orchestrator | Tuesday 13 May 2025 19:52:33 +0000 (0:00:01.844) 0:00:01.844 ***********
2025-05-13 19:52:35.856233 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:52:35.856393 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:52:35.856883 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:52:35.861814 | orchestrator |
2025-05-13 19:52:35.862068 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-13 19:52:35.862365 | orchestrator | Tuesday 13 May 2025 19:52:35 +0000 (0:00:01.995) 0:00:03.839 ***********
2025-05-13 19:52:36.029260 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:52:37.212949 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:52:37.213069 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:52:37.213083 | orchestrator |
2025-05-13 19:52:37.213112 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-13 19:52:37.213136 | orchestrator | Tuesday 13 May 2025 19:52:37 +0000 (0:00:01.351) 0:00:05.190 ***********
2025-05-13 19:52:38.927393 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:52:38.928309 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:52:38.929013 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:52:38.931093 | orchestrator |
2025-05-13 19:52:38.931516 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-13 19:52:38.932266 | orchestrator | Tuesday 13 May 2025 19:52:38 +0000 (0:00:01.719) 0:00:06.910 ***********
2025-05-13 19:52:39.112992 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:52:40.169419 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:52:40.170058 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:52:40.174107 | orchestrator |
2025-05-13 19:52:40.174145 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-13 19:52:40.174160 | orchestrator | Tuesday 13 May 2025 19:52:40 +0000 (0:00:01.241) 0:00:08.151 ***********
2025-05-13 19:52:42.380109 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-13 19:52:42.382994 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-13 19:52:42.383075 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-13 19:52:42.385612 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-13 19:52:42.387110 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-13 19:52:42.388333 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-13 19:52:42.389277 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-13 19:52:42.390300 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-13 19:52:42.391181 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-13 19:52:42.392467 | orchestrator |
2025-05-13 19:52:42.393380 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-13 19:52:42.394054 | orchestrator | Tuesday 13 May 2025 19:52:42 +0000 (0:00:02.211) 0:00:10.363 ***********
2025-05-13 19:52:44.635680 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-13 19:52:44.635976 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-13 19:52:44.636461 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-13 19:52:44.637542 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-13 19:52:44.638826 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-13 19:52:44.640866 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-13 19:52:44.641973 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-13 19:52:44.643779 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-13 19:52:44.645087 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-13 19:52:44.646090 | orchestrator |
2025-05-13 19:52:44.647472 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-13 19:52:44.647998 | orchestrator | Tuesday 13 May 2025 19:52:44 +0000 (0:00:02.252) 0:00:12.616 ***********
2025-05-13 19:52:47.651002 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-13 19:52:47.654430 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-13 19:52:47.655974 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-13 19:52:47.660539 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-13 19:52:47.664871 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-13 19:52:47.664913 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-13 19:52:47.664924 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-13 19:52:47.664936 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-13 19:52:47.664970 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-13 19:52:47.664983 | orchestrator |
2025-05-13 19:52:47.664996 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-13 19:52:47.666158 | orchestrator | Tuesday 13 May 2025 19:52:47 +0000 (0:00:03.010) 0:00:15.626 ***********
2025-05-13 19:52:49.264726 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:52:49.264866 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:52:49.264894 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:52:49.266950 | orchestrator |
2025-05-13 19:52:49.267012 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-13 19:52:49.268173 | orchestrator | Tuesday 13 May 2025 19:52:49 +0000 (0:00:01.618) 0:00:17.244 ***********
2025-05-13 19:52:51.455039 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:52:51.455217 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:52:51.456149 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:52:51.457096 | orchestrator |
2025-05-13 19:52:51.458009 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:52:51.458953 | orchestrator | 2025-05-13 19:52:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:52:51.459351 | orchestrator | 2025-05-13 19:52:51 | INFO  | Please wait and do not abort execution.
2025-05-13 19:52:51.460766 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:52:51.461415 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:52:51.462101 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:52:51.462896 | orchestrator |
2025-05-13 19:52:51.463446 | orchestrator |
2025-05-13 19:52:51.464203 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:52:51.465059 | orchestrator | Tuesday 13 May 2025 19:52:51 +0000 (0:00:02.193) 0:00:19.438 ***********
2025-05-13 19:52:51.466333 | orchestrator | ===============================================================================
2025-05-13 19:52:51.467259 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.01s
2025-05-13 19:52:51.467792 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 2.25s
2025-05-13 19:52:51.468994 | orchestrator | Check device availability ----------------------------------------------- 2.21s
2025-05-13 19:52:51.469686 | orchestrator | Request device events from the kernel ----------------------------------- 2.19s
2025-05-13 19:52:51.470823 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 2.00s
2025-05-13 19:52:51.471393 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 1.72s
2025-05-13 19:52:51.471962 | orchestrator | Reload udev rules ------------------------------------------------------- 1.62s
2025-05-13 19:52:51.472562 | orchestrator | Remove all rook related logical devices --------------------------------- 1.35s
2025-05-13 19:52:51.473107 | orchestrator | Remove all ceph related logical devices --------------------------------- 1.24s
2025-05-13 19:52:53.867009 | orchestrator | 2025-05-13 19:52:53 | INFO  | Task e5c26e4f-0e72-4d0a-a615-12aa79e2cc0f (facts) was prepared for execution.
2025-05-13 19:52:53.867147 | orchestrator | 2025-05-13 19:52:53 | INFO  | It takes a moment until task e5c26e4f-0e72-4d0a-a615-12aa79e2cc0f (facts) has been started and output is visible here.
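The wipe-partitions play above prepares /dev/sdb through /dev/sdd on the three storage nodes for a clean Ceph deployment: filesystem signatures are erased, the first 32 MiB is zeroed to remove leftover LVM/GPT metadata, and udev is re-triggered so the kernel re-reads the now-empty devices. The per-device shell equivalent (destructive; device names taken from the log):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
      sudo wipefs --all "$dev"                        # drop filesystem/RAID/partition signatures
      sudo dd if=/dev/zero of="$dev" bs=1M count=32   # zero the first 32M
    done
    sudo udevadm control --reload-rules               # reload udev rules
    sudo udevadm trigger                              # request device events from the kernel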
2025-05-13 19:52:59.712265 | orchestrator | 2025-05-13 19:52:59.712392 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-13 19:52:59.712466 | orchestrator | 2025-05-13 19:52:59.714297 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-13 19:52:59.714337 | orchestrator | Tuesday 13 May 2025 19:52:59 +0000 (0:00:01.805) 0:00:01.806 *********** 2025-05-13 19:53:03.219312 | orchestrator | ok: [testbed-manager] 2025-05-13 19:53:03.219423 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:53:03.219500 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:53:03.220280 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:53:03.220663 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:03.223780 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:03.224957 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:53:03.225178 | orchestrator | 2025-05-13 19:53:03.225551 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-13 19:53:03.228654 | orchestrator | Tuesday 13 May 2025 19:53:03 +0000 (0:00:03.502) 0:00:05.308 *********** 2025-05-13 19:53:03.406777 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:53:03.564037 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:53:03.672368 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:53:03.782474 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:53:03.876661 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:05.595120 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:05.595286 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:05.595375 | orchestrator | 2025-05-13 19:53:05.595646 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-13 19:53:05.596321 | orchestrator | 2025-05-13 19:53:05.597230 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-13 19:53:05.597254 | orchestrator | Tuesday 13 May 2025 19:53:05 +0000 (0:00:02.386) 0:00:07.695 *********** 2025-05-13 19:53:12.560159 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:53:12.567019 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:53:12.570406 | orchestrator | ok: [testbed-manager] 2025-05-13 19:53:12.570797 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:53:12.571799 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:12.573163 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:53:12.573731 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:12.574898 | orchestrator | 2025-05-13 19:53:12.576040 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-13 19:53:12.578268 | orchestrator | 2025-05-13 19:53:12.580943 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-13 19:53:12.581005 | orchestrator | Tuesday 13 May 2025 19:53:12 +0000 (0:00:06.962) 0:00:14.657 *********** 2025-05-13 19:53:12.785406 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:53:12.931370 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:53:13.085047 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:53:13.265151 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:53:13.405905 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:16.089396 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:16.089664 | orchestrator | skipping: 
[testbed-node-5]
2025-05-13 19:53:16.089772 | orchestrator |
2025-05-13 19:53:16.092169 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:53:16.092199 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.092244 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.092256 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.092288 | orchestrator | 2025-05-13 19:53:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:53:16.092304 | orchestrator | 2025-05-13 19:53:16 | INFO  | Please wait and do not abort execution.
2025-05-13 19:53:16.092362 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.093014 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.093036 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.093823 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:53:16.093846 | orchestrator |
2025-05-13 19:53:16.094491 | orchestrator |
2025-05-13 19:53:16.096457 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:53:16.096630 | orchestrator | Tuesday 13 May 2025 19:53:16 +0000 (0:00:03.532) 0:00:18.189 ***********
2025-05-13 19:53:16.096648 | orchestrator | ===============================================================================
2025-05-13 19:53:16.096661 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.96s
2025-05-13 19:53:16.096672 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.53s
2025-05-13 19:53:16.096683 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.50s
2025-05-13 19:53:16.096765 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.39s
2025-05-13 19:53:18.659698 | orchestrator | 2025-05-13 19:53:18 | INFO  | Task 53cedbaf-4621-4d46-8e92-2727eca9ef47 (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-13 19:53:18.659817 | orchestrator | 2025-05-13 19:53:18 | INFO  | It takes a moment until task 53cedbaf-4621-4d46-8e92-2727eca9ef47 (ceph-configure-lvm-volumes) has been started and output is visible here.
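The facts play above drives the osism.commons.facts role plus a plain fact-gathering pass. A minimal sketch of the pattern, assuming the conventional /etc/ansible/facts.d location for local facts (exposed to later plays as ansible_local); custom_fact_files is a hypothetical variable, not the role's real one:

  # Illustrative sketch -- not the osism.commons.facts source.
  - name: Create custom facts directory
    ansible.builtin.file:
      path: /etc/ansible/facts.d      # Ansible's default directory for local facts
      state: directory
      mode: "0755"

  - name: Copy fact files
    ansible.builtin.copy:
      src: "{{ item }}"
      dest: /etc/ansible/facts.d/
      mode: "0644"
    loop: "{{ custom_fact_files | default([]) }}"   # hypothetical variable; task skips when empty

  - name: Gathers facts about hosts
    ansible.builtin.setup:            # refreshes ansible_facts, including ansible_local

This matches the skipping/ok pattern in the recap: with no fact files defined for the testbed, only the directory creation and the setup pass actually run.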
2025-05-13 19:53:23.269256 | orchestrator | 2025-05-13 19:53:23.270179 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-13 19:53:23.270958 | orchestrator | 2025-05-13 19:53:23.271228 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 19:53:23.271834 | orchestrator | Tuesday 13 May 2025 19:53:23 +0000 (0:00:00.400) 0:00:00.400 *********** 2025-05-13 19:53:23.502964 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 19:53:23.503171 | orchestrator | 2025-05-13 19:53:23.503192 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 19:53:23.503617 | orchestrator | Tuesday 13 May 2025 19:53:23 +0000 (0:00:00.232) 0:00:00.632 *********** 2025-05-13 19:53:23.740234 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:23.740347 | orchestrator | 2025-05-13 19:53:23.740362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:23.740375 | orchestrator | Tuesday 13 May 2025 19:53:23 +0000 (0:00:00.236) 0:00:00.869 *********** 2025-05-13 19:53:24.108798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-13 19:53:24.108901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-13 19:53:24.109622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-13 19:53:24.109744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-13 19:53:24.111388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-13 19:53:24.112894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-13 19:53:24.113027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-13 19:53:24.113681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-13 19:53:24.113919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-13 19:53:24.114596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-13 19:53:24.115814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-13 19:53:24.117793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-13 19:53:24.118266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-13 19:53:24.118415 | orchestrator | 2025-05-13 19:53:24.118819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:24.119069 | orchestrator | Tuesday 13 May 2025 19:53:24 +0000 (0:00:00.371) 0:00:01.240 *********** 2025-05-13 19:53:24.596416 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:24.596590 | orchestrator | 2025-05-13 19:53:24.597716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:24.597816 | orchestrator | Tuesday 13 May 2025 19:53:24 +0000 (0:00:00.488) 0:00:01.729 *********** 2025-05-13 19:53:24.756980 | orchestrator | skipping: [testbed-node-3] 2025-05-13 
19:53:24.757106 | orchestrator | 2025-05-13 19:53:24.757123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:24.757137 | orchestrator | Tuesday 13 May 2025 19:53:24 +0000 (0:00:00.154) 0:00:01.884 *********** 2025-05-13 19:53:24.917389 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:24.918447 | orchestrator | 2025-05-13 19:53:24.918567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:24.919684 | orchestrator | Tuesday 13 May 2025 19:53:24 +0000 (0:00:00.166) 0:00:02.050 *********** 2025-05-13 19:53:25.080654 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:25.080769 | orchestrator | 2025-05-13 19:53:25.080880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:25.080937 | orchestrator | Tuesday 13 May 2025 19:53:25 +0000 (0:00:00.160) 0:00:02.211 *********** 2025-05-13 19:53:25.236007 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:25.236123 | orchestrator | 2025-05-13 19:53:25.236430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:25.237998 | orchestrator | Tuesday 13 May 2025 19:53:25 +0000 (0:00:00.159) 0:00:02.370 *********** 2025-05-13 19:53:25.392150 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:25.392436 | orchestrator | 2025-05-13 19:53:25.392825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:25.393230 | orchestrator | Tuesday 13 May 2025 19:53:25 +0000 (0:00:00.155) 0:00:02.526 *********** 2025-05-13 19:53:25.550282 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:25.550429 | orchestrator | 2025-05-13 19:53:25.551908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:25.552312 | orchestrator | Tuesday 13 May 2025 19:53:25 +0000 (0:00:00.157) 0:00:02.684 *********** 2025-05-13 19:53:25.717552 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:25.717639 | orchestrator | 2025-05-13 19:53:25.717869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:25.718071 | orchestrator | Tuesday 13 May 2025 19:53:25 +0000 (0:00:00.167) 0:00:02.851 *********** 2025-05-13 19:53:26.087009 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b) 2025-05-13 19:53:26.087133 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b) 2025-05-13 19:53:26.087810 | orchestrator | 2025-05-13 19:53:26.087967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:26.088302 | orchestrator | Tuesday 13 May 2025 19:53:26 +0000 (0:00:00.367) 0:00:03.219 *********** 2025-05-13 19:53:26.456470 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd) 2025-05-13 19:53:26.456719 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd) 2025-05-13 19:53:26.456742 | orchestrator | 2025-05-13 19:53:26.457175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:26.457644 | orchestrator | Tuesday 13 May 2025 19:53:26 +0000 (0:00:00.364) 0:00:03.584 *********** 2025-05-13 
19:53:26.959824 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161) 2025-05-13 19:53:26.961296 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161) 2025-05-13 19:53:26.963206 | orchestrator | 2025-05-13 19:53:26.963517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:26.964216 | orchestrator | Tuesday 13 May 2025 19:53:26 +0000 (0:00:00.508) 0:00:04.093 *********** 2025-05-13 19:53:27.487303 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4) 2025-05-13 19:53:27.491986 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4) 2025-05-13 19:53:27.492039 | orchestrator | 2025-05-13 19:53:27.492054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:27.492066 | orchestrator | Tuesday 13 May 2025 19:53:27 +0000 (0:00:00.526) 0:00:04.619 *********** 2025-05-13 19:53:28.015757 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:53:28.018114 | orchestrator | 2025-05-13 19:53:28.018761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:28.019123 | orchestrator | Tuesday 13 May 2025 19:53:28 +0000 (0:00:00.530) 0:00:05.149 *********** 2025-05-13 19:53:28.349780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-13 19:53:28.351018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-13 19:53:28.351158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-13 19:53:28.352101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-13 19:53:28.353094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-13 19:53:28.354621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-13 19:53:28.355427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-13 19:53:28.356695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-13 19:53:28.357902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-13 19:53:28.358970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-13 19:53:28.359735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-13 19:53:28.360795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-13 19:53:28.361166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-13 19:53:28.361824 | orchestrator | 2025-05-13 19:53:28.362251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:28.362958 | orchestrator | Tuesday 13 May 2025 19:53:28 +0000 (0:00:00.333) 0:00:05.482 *********** 2025-05-13 19:53:28.536944 | orchestrator | skipping: [testbed-node-3] 
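The discovery tasks above first enumerate kernel block devices (loop0-loop7, sda-sdd, sr0) and then resolve stable /dev/disk/by-id symlinks such as the scsi-0QEMU_QEMU_HARDDISK_* entries. A minimal sketch of that lookup under assumed logic (the real tasks live in /ansible/tasks/_add-device-links.yml and _add-device-partitions.yml, which this log does not show):

  # Illustrative sketch of the discovery logic -- not the actual included task files.
  - name: Get initial list of available block devices
    ansible.builtin.command: ls /sys/block            # yields loop0..7, sda..sdd, sr0 on these VMs
    register: _blockdevs
    changed_when: false

  - name: Add known links to the list of available block devices
    ansible.builtin.command: find /dev/disk/by-id -lname "*/{{ item }}"   # symlinks that point at the device
    loop: "{{ _blockdevs.stdout_lines }}"
    register: _device_links
    changed_when: false                               # loop devices simply return no matches

Keying the Ceph configuration to by-id links rather than bare sdX names keeps it stable across reboots, where kernel device names can change order.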
2025-05-13 19:53:28.539762 | orchestrator | 2025-05-13 19:53:28.539920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:28.541131 | orchestrator | Tuesday 13 May 2025 19:53:28 +0000 (0:00:00.186) 0:00:05.669 *********** 2025-05-13 19:53:28.717585 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:28.718579 | orchestrator | 2025-05-13 19:53:28.721892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:28.722755 | orchestrator | Tuesday 13 May 2025 19:53:28 +0000 (0:00:00.181) 0:00:05.850 *********** 2025-05-13 19:53:28.908599 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:28.908711 | orchestrator | 2025-05-13 19:53:28.910060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:28.910088 | orchestrator | Tuesday 13 May 2025 19:53:28 +0000 (0:00:00.191) 0:00:06.042 *********** 2025-05-13 19:53:29.100389 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:29.100635 | orchestrator | 2025-05-13 19:53:29.100913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:29.102711 | orchestrator | Tuesday 13 May 2025 19:53:29 +0000 (0:00:00.191) 0:00:06.233 *********** 2025-05-13 19:53:29.319900 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:29.320226 | orchestrator | 2025-05-13 19:53:29.322541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:29.326879 | orchestrator | Tuesday 13 May 2025 19:53:29 +0000 (0:00:00.218) 0:00:06.451 *********** 2025-05-13 19:53:29.507266 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:29.510170 | orchestrator | 2025-05-13 19:53:29.514768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:29.514807 | orchestrator | Tuesday 13 May 2025 19:53:29 +0000 (0:00:00.185) 0:00:06.637 *********** 2025-05-13 19:53:29.682850 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:29.684502 | orchestrator | 2025-05-13 19:53:29.688278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:29.688368 | orchestrator | Tuesday 13 May 2025 19:53:29 +0000 (0:00:00.177) 0:00:06.815 *********** 2025-05-13 19:53:29.852937 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:29.853978 | orchestrator | 2025-05-13 19:53:29.855616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:29.856005 | orchestrator | Tuesday 13 May 2025 19:53:29 +0000 (0:00:00.170) 0:00:06.985 *********** 2025-05-13 19:53:30.648324 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-13 19:53:30.648457 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-13 19:53:30.648522 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-13 19:53:30.649076 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-13 19:53:30.649349 | orchestrator | 2025-05-13 19:53:30.649792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:30.650636 | orchestrator | Tuesday 13 May 2025 19:53:30 +0000 (0:00:00.794) 0:00:07.780 *********** 2025-05-13 19:53:30.821333 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:30.821531 | orchestrator | 2025-05-13 19:53:30.821617 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:30.822728 | orchestrator | Tuesday 13 May 2025 19:53:30 +0000 (0:00:00.174) 0:00:07.954 *********** 2025-05-13 19:53:30.993542 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:30.993734 | orchestrator | 2025-05-13 19:53:30.993752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:30.994095 | orchestrator | Tuesday 13 May 2025 19:53:30 +0000 (0:00:00.173) 0:00:08.128 *********** 2025-05-13 19:53:31.156949 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:31.158843 | orchestrator | 2025-05-13 19:53:31.158874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:31.162670 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.162) 0:00:08.290 *********** 2025-05-13 19:53:31.341178 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:31.343880 | orchestrator | 2025-05-13 19:53:31.346125 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-13 19:53:31.347885 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.182) 0:00:08.473 *********** 2025-05-13 19:53:31.500552 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-13 19:53:31.501680 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-13 19:53:31.502735 | orchestrator | 2025-05-13 19:53:31.503788 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-13 19:53:31.507658 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.158) 0:00:08.632 *********** 2025-05-13 19:53:31.635287 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:31.636954 | orchestrator | 2025-05-13 19:53:31.639749 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-13 19:53:31.639805 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.136) 0:00:08.768 *********** 2025-05-13 19:53:31.778688 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:31.779982 | orchestrator | 2025-05-13 19:53:31.780502 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-13 19:53:31.782247 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.143) 0:00:08.912 *********** 2025-05-13 19:53:31.908555 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:31.908660 | orchestrator | 2025-05-13 19:53:31.908675 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-13 19:53:31.909553 | orchestrator | Tuesday 13 May 2025 19:53:31 +0000 (0:00:00.127) 0:00:09.039 *********** 2025-05-13 19:53:32.043336 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:32.044978 | orchestrator | 2025-05-13 19:53:32.049374 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-13 19:53:32.049982 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.135) 0:00:09.175 *********** 2025-05-13 19:53:32.199412 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb14b8c1-d757-5b78-a398-3e433d34ee3e'}}) 2025-05-13 19:53:32.201788 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '55d6de5b-857a-5090-90bd-6b26b006e6c2'}}) 2025-05-13 19:53:32.204003 | orchestrator | 
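Each OSD device is then given a stable UUID and turned into an lvm_volumes entry of the form data=osd-block-&lt;uuid&gt; / data_vg=ceph-&lt;uuid&gt;, as the later print tasks show. The version-5 UUIDs in the output suggest a name-based derivation; the uuid5 seed below (hostname plus device name) is a guess, so treat this as a sketch rather than the playbook's actual logic:

  # Illustrative sketch -- the to_uuid seed is an assumption, not confirmed by the log.
  - name: Set UUIDs for OSD VGs/LVs
    ansible.builtin.set_fact:
      ceph_osd_devices: >-
        {{ ceph_osd_devices
           | combine({item.key: {'osd_lvm_uuid': (inventory_hostname ~ item.key) | to_uuid}}) }}
    loop: "{{ ceph_osd_devices | dict2items }}"
    when: item.value is none          # matches the (item={'key': 'sdb', 'value': None}) entries above

  - name: Generate lvm_volumes structure (block only)
    ansible.builtin.set_fact:
      lvm_volumes: >-
        {{ lvm_volumes | default([])
           + [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
               'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
    loop: "{{ ceph_osd_devices | dict2items }}"

The resulting list matches ceph-ansible's lvm_volumes format, which the Ceph deployment presumably feeds to ceph-volume to create one VG/LV pair per OSD.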
2025-05-13 19:53:32.208298 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-13 19:53:32.209655 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.156) 0:00:09.332 *********** 2025-05-13 19:53:32.353785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb14b8c1-d757-5b78-a398-3e433d34ee3e'}})  2025-05-13 19:53:32.355992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '55d6de5b-857a-5090-90bd-6b26b006e6c2'}})  2025-05-13 19:53:32.357200 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:32.359447 | orchestrator | 2025-05-13 19:53:32.361270 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-13 19:53:32.362979 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.154) 0:00:09.486 *********** 2025-05-13 19:53:32.700665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb14b8c1-d757-5b78-a398-3e433d34ee3e'}})  2025-05-13 19:53:32.700839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '55d6de5b-857a-5090-90bd-6b26b006e6c2'}})  2025-05-13 19:53:32.705287 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:32.707151 | orchestrator | 2025-05-13 19:53:32.707604 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-13 19:53:32.708857 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.344) 0:00:09.830 *********** 2025-05-13 19:53:32.849884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb14b8c1-d757-5b78-a398-3e433d34ee3e'}})  2025-05-13 19:53:32.852076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '55d6de5b-857a-5090-90bd-6b26b006e6c2'}})  2025-05-13 19:53:32.853050 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:32.856542 | orchestrator | 2025-05-13 19:53:32.857138 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-13 19:53:32.858529 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.148) 0:00:09.979 *********** 2025-05-13 19:53:32.988520 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:32.988623 | orchestrator | 2025-05-13 19:53:32.988638 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-13 19:53:32.989113 | orchestrator | Tuesday 13 May 2025 19:53:32 +0000 (0:00:00.140) 0:00:10.119 *********** 2025-05-13 19:53:33.135329 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:53:33.136821 | orchestrator | 2025-05-13 19:53:33.137143 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-13 19:53:33.137838 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.147) 0:00:10.267 *********** 2025-05-13 19:53:33.263942 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:33.264888 | orchestrator | 2025-05-13 19:53:33.267887 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-13 19:53:33.267999 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.126) 0:00:10.394 *********** 2025-05-13 19:53:33.382318 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:53:33.384323 | orchestrator | 2025-05-13 19:53:33.385037 | orchestrator | TASK [Set DB+WAL devices config data] 
******************************************
2025-05-13 19:53:33.387418 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.120) 0:00:10.514 ***********
2025-05-13 19:53:33.523004 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:53:33.523680 | orchestrator |
2025-05-13 19:53:33.524699 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-13 19:53:33.525982 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.137) 0:00:10.652 ***********
2025-05-13 19:53:33.647574 | orchestrator | ok: [testbed-node-3] => {
2025-05-13 19:53:33.648248 | orchestrator |     "ceph_osd_devices": {
2025-05-13 19:53:33.651521 | orchestrator |         "sdb": {
2025-05-13 19:53:33.652443 | orchestrator |             "osd_lvm_uuid": "eb14b8c1-d757-5b78-a398-3e433d34ee3e"
2025-05-13 19:53:33.655857 | orchestrator |         },
2025-05-13 19:53:33.656113 | orchestrator |         "sdc": {
2025-05-13 19:53:33.656492 | orchestrator |             "osd_lvm_uuid": "55d6de5b-857a-5090-90bd-6b26b006e6c2"
2025-05-13 19:53:33.656717 | orchestrator |         }
2025-05-13 19:53:33.657667 | orchestrator |     }
2025-05-13 19:53:33.658529 | orchestrator | }
2025-05-13 19:53:33.658982 | orchestrator |
2025-05-13 19:53:33.659571 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-13 19:53:33.660233 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.128) 0:00:10.780 ***********
2025-05-13 19:53:33.771573 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:53:33.771826 | orchestrator |
2025-05-13 19:53:33.772966 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-13 19:53:33.773093 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.123) 0:00:10.903 ***********
2025-05-13 19:53:33.865111 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:53:33.865235 | orchestrator |
2025-05-13 19:53:33.865339 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-13 19:53:33.867851 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.094) 0:00:10.998 ***********
2025-05-13 19:53:33.964936 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:53:33.966003 | orchestrator |
2025-05-13 19:53:33.967423 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-13 19:53:33.967908 | orchestrator | Tuesday 13 May 2025 19:53:33 +0000 (0:00:00.100) 0:00:11.099 ***********
2025-05-13 19:53:34.169987 | orchestrator | changed: [testbed-node-3] => {
2025-05-13 19:53:34.171044 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-13 19:53:34.172747 | orchestrator |         "ceph_osd_devices": {
2025-05-13 19:53:34.176055 | orchestrator |             "sdb": {
2025-05-13 19:53:34.176800 | orchestrator |                 "osd_lvm_uuid": "eb14b8c1-d757-5b78-a398-3e433d34ee3e"
2025-05-13 19:53:34.177124 | orchestrator |             },
2025-05-13 19:53:34.177791 | orchestrator |             "sdc": {
2025-05-13 19:53:34.179579 | orchestrator |                 "osd_lvm_uuid": "55d6de5b-857a-5090-90bd-6b26b006e6c2"
2025-05-13 19:53:34.180100 | orchestrator |             }
2025-05-13 19:53:34.180811 | orchestrator |         },
2025-05-13 19:53:34.181418 | orchestrator |         "lvm_volumes": [
2025-05-13 19:53:34.181756 | orchestrator |             {
2025-05-13 19:53:34.182396 | orchestrator |                 "data": "osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e",
2025-05-13 19:53:34.182753 | orchestrator |                 "data_vg": "ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e"
2025-05-13 19:53:34.183191 | orchestrator |             },
2025-05-13
19:53:34.183529 | orchestrator |  { 2025-05-13 19:53:34.184042 | orchestrator |  "data": "osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2", 2025-05-13 19:53:34.184826 | orchestrator |  "data_vg": "ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2" 2025-05-13 19:53:34.184990 | orchestrator |  } 2025-05-13 19:53:34.185150 | orchestrator |  ] 2025-05-13 19:53:34.185721 | orchestrator |  } 2025-05-13 19:53:34.186812 | orchestrator | } 2025-05-13 19:53:34.187159 | orchestrator | 2025-05-13 19:53:34.187618 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-13 19:53:34.188500 | orchestrator | Tuesday 13 May 2025 19:53:34 +0000 (0:00:00.203) 0:00:11.302 *********** 2025-05-13 19:53:36.053138 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 19:53:36.055070 | orchestrator | 2025-05-13 19:53:36.056311 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-13 19:53:36.057051 | orchestrator | 2025-05-13 19:53:36.057792 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 19:53:36.058243 | orchestrator | Tuesday 13 May 2025 19:53:36 +0000 (0:00:01.882) 0:00:13.185 *********** 2025-05-13 19:53:36.276070 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-13 19:53:36.276666 | orchestrator | 2025-05-13 19:53:36.277637 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 19:53:36.279093 | orchestrator | Tuesday 13 May 2025 19:53:36 +0000 (0:00:00.224) 0:00:13.409 *********** 2025-05-13 19:53:36.553622 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:36.557101 | orchestrator | 2025-05-13 19:53:36.558507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:36.559402 | orchestrator | Tuesday 13 May 2025 19:53:36 +0000 (0:00:00.272) 0:00:13.681 *********** 2025-05-13 19:53:36.915239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-13 19:53:36.916690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-13 19:53:36.916724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-13 19:53:36.918092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-13 19:53:36.918389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-13 19:53:36.918998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-13 19:53:36.919020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-13 19:53:36.919204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-13 19:53:36.919554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-13 19:53:36.920212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-13 19:53:36.921540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-13 19:53:36.921876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-13 19:53:36.922292 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-13 19:53:36.923081 | orchestrator | 2025-05-13 19:53:36.923581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:36.925193 | orchestrator | Tuesday 13 May 2025 19:53:36 +0000 (0:00:00.364) 0:00:14.046 *********** 2025-05-13 19:53:37.097497 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:37.097735 | orchestrator | 2025-05-13 19:53:37.101825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:37.102617 | orchestrator | Tuesday 13 May 2025 19:53:37 +0000 (0:00:00.184) 0:00:14.230 *********** 2025-05-13 19:53:37.286179 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:37.286856 | orchestrator | 2025-05-13 19:53:37.287949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:37.288816 | orchestrator | Tuesday 13 May 2025 19:53:37 +0000 (0:00:00.188) 0:00:14.419 *********** 2025-05-13 19:53:37.468266 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:37.469559 | orchestrator | 2025-05-13 19:53:37.470396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:37.471415 | orchestrator | Tuesday 13 May 2025 19:53:37 +0000 (0:00:00.181) 0:00:14.601 *********** 2025-05-13 19:53:37.656367 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:37.656532 | orchestrator | 2025-05-13 19:53:37.656560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:37.660641 | orchestrator | Tuesday 13 May 2025 19:53:37 +0000 (0:00:00.184) 0:00:14.785 *********** 2025-05-13 19:53:38.208499 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:38.208772 | orchestrator | 2025-05-13 19:53:38.209550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:38.210554 | orchestrator | Tuesday 13 May 2025 19:53:38 +0000 (0:00:00.555) 0:00:15.340 *********** 2025-05-13 19:53:38.399769 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:38.400733 | orchestrator | 2025-05-13 19:53:38.401743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:38.402491 | orchestrator | Tuesday 13 May 2025 19:53:38 +0000 (0:00:00.191) 0:00:15.532 *********** 2025-05-13 19:53:38.604379 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:38.604605 | orchestrator | 2025-05-13 19:53:38.605547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:38.607844 | orchestrator | Tuesday 13 May 2025 19:53:38 +0000 (0:00:00.202) 0:00:15.735 *********** 2025-05-13 19:53:38.792046 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:38.793376 | orchestrator | 2025-05-13 19:53:38.795289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:38.796661 | orchestrator | Tuesday 13 May 2025 19:53:38 +0000 (0:00:00.188) 0:00:15.924 *********** 2025-05-13 19:53:39.206869 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2) 2025-05-13 19:53:39.207130 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2) 2025-05-13 19:53:39.209368 | orchestrator | 2025-05-13 
19:53:39.210126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:39.211056 | orchestrator | Tuesday 13 May 2025 19:53:39 +0000 (0:00:00.414) 0:00:16.338 *********** 2025-05-13 19:53:39.633978 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043) 2025-05-13 19:53:39.634984 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043) 2025-05-13 19:53:39.636367 | orchestrator | 2025-05-13 19:53:39.637828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:39.638603 | orchestrator | Tuesday 13 May 2025 19:53:39 +0000 (0:00:00.427) 0:00:16.765 *********** 2025-05-13 19:53:40.070523 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36) 2025-05-13 19:53:40.071964 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36) 2025-05-13 19:53:40.074568 | orchestrator | 2025-05-13 19:53:40.074613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:40.074932 | orchestrator | Tuesday 13 May 2025 19:53:40 +0000 (0:00:00.436) 0:00:17.201 *********** 2025-05-13 19:53:40.571534 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb) 2025-05-13 19:53:40.571653 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb) 2025-05-13 19:53:40.573050 | orchestrator | 2025-05-13 19:53:40.573134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:40.573899 | orchestrator | Tuesday 13 May 2025 19:53:40 +0000 (0:00:00.497) 0:00:17.699 *********** 2025-05-13 19:53:40.903777 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:53:40.904506 | orchestrator | 2025-05-13 19:53:40.905252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:40.906080 | orchestrator | Tuesday 13 May 2025 19:53:40 +0000 (0:00:00.335) 0:00:18.035 *********** 2025-05-13 19:53:41.277866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-13 19:53:41.279314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-13 19:53:41.280212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-13 19:53:41.281419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-13 19:53:41.285596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-13 19:53:41.286006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-13 19:53:41.287208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-13 19:53:41.288046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-13 19:53:41.288665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-13 19:53:41.289211 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-13 19:53:41.289797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-13 19:53:41.290224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-13 19:53:41.290965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-13 19:53:41.291416 | orchestrator | 2025-05-13 19:53:41.292024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:41.292574 | orchestrator | Tuesday 13 May 2025 19:53:41 +0000 (0:00:00.375) 0:00:18.410 *********** 2025-05-13 19:53:41.496126 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:41.496232 | orchestrator | 2025-05-13 19:53:41.496248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:41.496533 | orchestrator | Tuesday 13 May 2025 19:53:41 +0000 (0:00:00.215) 0:00:18.626 *********** 2025-05-13 19:53:42.156241 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:42.162996 | orchestrator | 2025-05-13 19:53:42.165816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:42.165843 | orchestrator | Tuesday 13 May 2025 19:53:42 +0000 (0:00:00.659) 0:00:19.285 *********** 2025-05-13 19:53:42.360518 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:42.363121 | orchestrator | 2025-05-13 19:53:42.364744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:42.366072 | orchestrator | Tuesday 13 May 2025 19:53:42 +0000 (0:00:00.204) 0:00:19.490 *********** 2025-05-13 19:53:42.559329 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:42.561946 | orchestrator | 2025-05-13 19:53:42.562282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:42.563330 | orchestrator | Tuesday 13 May 2025 19:53:42 +0000 (0:00:00.200) 0:00:19.691 *********** 2025-05-13 19:53:42.744522 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:42.746012 | orchestrator | 2025-05-13 19:53:42.748988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:42.749982 | orchestrator | Tuesday 13 May 2025 19:53:42 +0000 (0:00:00.182) 0:00:19.874 *********** 2025-05-13 19:53:42.960951 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:42.963032 | orchestrator | 2025-05-13 19:53:42.965903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:42.966723 | orchestrator | Tuesday 13 May 2025 19:53:42 +0000 (0:00:00.218) 0:00:20.092 *********** 2025-05-13 19:53:43.165205 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:43.165309 | orchestrator | 2025-05-13 19:53:43.165324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:43.166139 | orchestrator | Tuesday 13 May 2025 19:53:43 +0000 (0:00:00.203) 0:00:20.296 *********** 2025-05-13 19:53:43.369288 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:43.371180 | orchestrator | 2025-05-13 19:53:43.378917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:43.379000 | orchestrator | Tuesday 13 May 2025 
19:53:43 +0000 (0:00:00.204) 0:00:20.500 *********** 2025-05-13 19:53:44.000163 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-13 19:53:44.000350 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-13 19:53:44.003020 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-13 19:53:44.003491 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-13 19:53:44.004219 | orchestrator | 2025-05-13 19:53:44.004386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:44.005301 | orchestrator | Tuesday 13 May 2025 19:53:43 +0000 (0:00:00.631) 0:00:21.132 *********** 2025-05-13 19:53:44.192888 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:44.192995 | orchestrator | 2025-05-13 19:53:44.193011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:44.193087 | orchestrator | Tuesday 13 May 2025 19:53:44 +0000 (0:00:00.192) 0:00:21.324 *********** 2025-05-13 19:53:44.389367 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:44.389560 | orchestrator | 2025-05-13 19:53:44.389943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:44.390243 | orchestrator | Tuesday 13 May 2025 19:53:44 +0000 (0:00:00.196) 0:00:21.521 *********** 2025-05-13 19:53:44.593094 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:44.593207 | orchestrator | 2025-05-13 19:53:44.593223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:44.593236 | orchestrator | Tuesday 13 May 2025 19:53:44 +0000 (0:00:00.201) 0:00:21.722 *********** 2025-05-13 19:53:44.808135 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:44.809853 | orchestrator | 2025-05-13 19:53:44.810654 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-13 19:53:44.813346 | orchestrator | Tuesday 13 May 2025 19:53:44 +0000 (0:00:00.213) 0:00:21.936 *********** 2025-05-13 19:53:45.146371 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-13 19:53:45.148009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-13 19:53:45.148135 | orchestrator | 2025-05-13 19:53:45.149402 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-13 19:53:45.150375 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.340) 0:00:22.276 *********** 2025-05-13 19:53:45.291992 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:45.292910 | orchestrator | 2025-05-13 19:53:45.294507 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-13 19:53:45.298773 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.147) 0:00:22.423 *********** 2025-05-13 19:53:45.433225 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:45.436290 | orchestrator | 2025-05-13 19:53:45.439026 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-13 19:53:45.439063 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.141) 0:00:22.565 *********** 2025-05-13 19:53:45.576955 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:45.579991 | orchestrator | 2025-05-13 19:53:45.582584 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-13 
19:53:45.584261 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.138) 0:00:22.704 *********** 2025-05-13 19:53:45.710863 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:45.712053 | orchestrator | 2025-05-13 19:53:45.713843 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-13 19:53:45.717601 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.138) 0:00:22.842 *********** 2025-05-13 19:53:45.876320 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7ef241c-3ce4-53e3-9962-a0236c38cab6'}}) 2025-05-13 19:53:45.876968 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53409cd5-715f-5221-bc58-8adc9fe4a6bc'}}) 2025-05-13 19:53:45.877890 | orchestrator | 2025-05-13 19:53:45.878926 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-13 19:53:45.882845 | orchestrator | Tuesday 13 May 2025 19:53:45 +0000 (0:00:00.165) 0:00:23.008 *********** 2025-05-13 19:53:46.028788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7ef241c-3ce4-53e3-9962-a0236c38cab6'}})  2025-05-13 19:53:46.029934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53409cd5-715f-5221-bc58-8adc9fe4a6bc'}})  2025-05-13 19:53:46.030696 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:46.032795 | orchestrator | 2025-05-13 19:53:46.034056 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-13 19:53:46.034546 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.151) 0:00:23.159 *********** 2025-05-13 19:53:46.170861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7ef241c-3ce4-53e3-9962-a0236c38cab6'}})  2025-05-13 19:53:46.171716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53409cd5-715f-5221-bc58-8adc9fe4a6bc'}})  2025-05-13 19:53:46.172852 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:46.173566 | orchestrator | 2025-05-13 19:53:46.174123 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-13 19:53:46.177647 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.144) 0:00:23.303 *********** 2025-05-13 19:53:46.316102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7ef241c-3ce4-53e3-9962-a0236c38cab6'}})  2025-05-13 19:53:46.317300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53409cd5-715f-5221-bc58-8adc9fe4a6bc'}})  2025-05-13 19:53:46.317932 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:53:46.321945 | orchestrator | 2025-05-13 19:53:46.321993 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-13 19:53:46.322007 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.144) 0:00:23.448 *********** 2025-05-13 19:53:46.450639 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:46.452025 | orchestrator | 2025-05-13 19:53:46.453288 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-13 19:53:46.454142 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.134) 0:00:23.583 *********** 2025-05-13 19:53:46.603487 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:53:46.603856 
| orchestrator |
2025-05-13 19:53:46.606809 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-13 19:53:46.607639 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.152) 0:00:23.736 ***********
2025-05-13 19:53:46.737831 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:46.742089 | orchestrator |
2025-05-13 19:53:46.744508 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-13 19:53:46.745592 | orchestrator | Tuesday 13 May 2025 19:53:46 +0000 (0:00:00.133) 0:00:23.869 ***********
2025-05-13 19:53:47.070695 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:47.072674 | orchestrator |
2025-05-13 19:53:47.073870 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-13 19:53:47.075230 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.332) 0:00:24.202 ***********
2025-05-13 19:53:47.204782 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:47.205727 | orchestrator |
2025-05-13 19:53:47.207651 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-13 19:53:47.209506 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.134) 0:00:24.336 ***********
2025-05-13 19:53:47.354114 | orchestrator | ok: [testbed-node-4] => {
2025-05-13 19:53:47.354608 | orchestrator |     "ceph_osd_devices": {
2025-05-13 19:53:47.355899 | orchestrator |         "sdb": {
2025-05-13 19:53:47.357199 | orchestrator |             "osd_lvm_uuid": "c7ef241c-3ce4-53e3-9962-a0236c38cab6"
2025-05-13 19:53:47.358296 | orchestrator |         },
2025-05-13 19:53:47.359617 | orchestrator |         "sdc": {
2025-05-13 19:53:47.361228 | orchestrator |             "osd_lvm_uuid": "53409cd5-715f-5221-bc58-8adc9fe4a6bc"
2025-05-13 19:53:47.361802 | orchestrator |         }
2025-05-13 19:53:47.363239 | orchestrator |     }
2025-05-13 19:53:47.364524 | orchestrator | }
2025-05-13 19:53:47.365867 | orchestrator |
2025-05-13 19:53:47.366730 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-13 19:53:47.367571 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.147) 0:00:24.483 ***********
2025-05-13 19:53:47.480973 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:47.482782 | orchestrator |
2025-05-13 19:53:47.485998 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-13 19:53:47.487975 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.129) 0:00:24.612 ***********
2025-05-13 19:53:47.613607 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:47.615721 | orchestrator |
2025-05-13 19:53:47.618066 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-13 19:53:47.619237 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.132) 0:00:24.745 ***********
2025-05-13 19:53:47.746922 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:53:47.750710 | orchestrator |
2025-05-13 19:53:47.752273 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-13 19:53:47.754561 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.132) 0:00:24.877 ***********
2025-05-13 19:53:47.945111 | orchestrator | changed: [testbed-node-4] => {
2025-05-13 19:53:47.945765 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-13 19:53:47.946893 | orchestrator |         "ceph_osd_devices": {
2025-05-13
19:53:47.948547 | orchestrator |  "sdb": { 2025-05-13 19:53:47.950459 | orchestrator |  "osd_lvm_uuid": "c7ef241c-3ce4-53e3-9962-a0236c38cab6" 2025-05-13 19:53:47.952243 | orchestrator |  }, 2025-05-13 19:53:47.953727 | orchestrator |  "sdc": { 2025-05-13 19:53:47.954953 | orchestrator |  "osd_lvm_uuid": "53409cd5-715f-5221-bc58-8adc9fe4a6bc" 2025-05-13 19:53:47.955911 | orchestrator |  } 2025-05-13 19:53:47.957559 | orchestrator |  }, 2025-05-13 19:53:47.958635 | orchestrator |  "lvm_volumes": [ 2025-05-13 19:53:47.959896 | orchestrator |  { 2025-05-13 19:53:47.960744 | orchestrator |  "data": "osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6", 2025-05-13 19:53:47.962776 | orchestrator |  "data_vg": "ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6" 2025-05-13 19:53:47.963887 | orchestrator |  }, 2025-05-13 19:53:47.964787 | orchestrator |  { 2025-05-13 19:53:47.965377 | orchestrator |  "data": "osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc", 2025-05-13 19:53:47.966464 | orchestrator |  "data_vg": "ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc" 2025-05-13 19:53:47.966782 | orchestrator |  } 2025-05-13 19:53:47.967568 | orchestrator |  ] 2025-05-13 19:53:47.968444 | orchestrator |  } 2025-05-13 19:53:47.968765 | orchestrator | } 2025-05-13 19:53:47.969598 | orchestrator | 2025-05-13 19:53:47.970101 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-13 19:53:47.970759 | orchestrator | Tuesday 13 May 2025 19:53:47 +0000 (0:00:00.200) 0:00:25.078 *********** 2025-05-13 19:53:49.104950 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-13 19:53:49.105087 | orchestrator | 2025-05-13 19:53:49.105227 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-13 19:53:49.105595 | orchestrator | 2025-05-13 19:53:49.106728 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 19:53:49.106763 | orchestrator | Tuesday 13 May 2025 19:53:49 +0000 (0:00:01.158) 0:00:26.236 *********** 2025-05-13 19:53:49.558748 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-13 19:53:49.559996 | orchestrator | 2025-05-13 19:53:49.561382 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 19:53:49.566114 | orchestrator | Tuesday 13 May 2025 19:53:49 +0000 (0:00:00.454) 0:00:26.690 *********** 2025-05-13 19:53:50.277379 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:53:50.278740 | orchestrator | 2025-05-13 19:53:50.279737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:50.282929 | orchestrator | Tuesday 13 May 2025 19:53:50 +0000 (0:00:00.717) 0:00:27.408 *********** 2025-05-13 19:53:50.662124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-13 19:53:50.664238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-13 19:53:50.665654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-13 19:53:50.669989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-13 19:53:50.671108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-13 19:53:50.672822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-05-13 19:53:50.673440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-13 19:53:50.674519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-13 19:53:50.675815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-13 19:53:50.676047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-13 19:53:50.676979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-13 19:53:50.677337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-13 19:53:50.680930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-13 19:53:50.681319 | orchestrator | 2025-05-13 19:53:50.681841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:50.682564 | orchestrator | Tuesday 13 May 2025 19:53:50 +0000 (0:00:00.383) 0:00:27.792 *********** 2025-05-13 19:53:50.875827 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:50.875948 | orchestrator | 2025-05-13 19:53:50.876881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:50.877753 | orchestrator | Tuesday 13 May 2025 19:53:50 +0000 (0:00:00.214) 0:00:28.006 *********** 2025-05-13 19:53:51.107248 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:51.107850 | orchestrator | 2025-05-13 19:53:51.109232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:51.110665 | orchestrator | Tuesday 13 May 2025 19:53:51 +0000 (0:00:00.233) 0:00:28.240 *********** 2025-05-13 19:53:51.313127 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:51.314659 | orchestrator | 2025-05-13 19:53:51.317868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:51.317946 | orchestrator | Tuesday 13 May 2025 19:53:51 +0000 (0:00:00.202) 0:00:28.443 *********** 2025-05-13 19:53:51.511564 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:51.512583 | orchestrator | 2025-05-13 19:53:51.516770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:51.516801 | orchestrator | Tuesday 13 May 2025 19:53:51 +0000 (0:00:00.199) 0:00:28.642 *********** 2025-05-13 19:53:51.707360 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:51.709746 | orchestrator | 2025-05-13 19:53:51.713761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:51.713788 | orchestrator | Tuesday 13 May 2025 19:53:51 +0000 (0:00:00.196) 0:00:28.839 *********** 2025-05-13 19:53:51.950528 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:51.952468 | orchestrator | 2025-05-13 19:53:51.955739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:51.955776 | orchestrator | Tuesday 13 May 2025 19:53:51 +0000 (0:00:00.241) 0:00:29.081 *********** 2025-05-13 19:53:52.195179 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:52.198355 | orchestrator | 2025-05-13 19:53:52.198464 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-05-13 19:53:52.198480 | orchestrator | Tuesday 13 May 2025 19:53:52 +0000 (0:00:00.245) 0:00:29.326 *********** 2025-05-13 19:53:52.387928 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:52.388098 | orchestrator | 2025-05-13 19:53:52.389493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:52.390004 | orchestrator | Tuesday 13 May 2025 19:53:52 +0000 (0:00:00.192) 0:00:29.519 *********** 2025-05-13 19:53:52.999224 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af) 2025-05-13 19:53:52.999318 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af) 2025-05-13 19:53:53.001785 | orchestrator | 2025-05-13 19:53:53.002197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:53.002797 | orchestrator | Tuesday 13 May 2025 19:53:52 +0000 (0:00:00.611) 0:00:30.130 *********** 2025-05-13 19:53:53.798507 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711) 2025-05-13 19:53:53.799106 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711) 2025-05-13 19:53:53.799728 | orchestrator | 2025-05-13 19:53:53.801658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:53.804061 | orchestrator | Tuesday 13 May 2025 19:53:53 +0000 (0:00:00.799) 0:00:30.929 *********** 2025-05-13 19:53:54.202757 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d) 2025-05-13 19:53:54.205313 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d) 2025-05-13 19:53:54.207292 | orchestrator | 2025-05-13 19:53:54.211125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:54.211711 | orchestrator | Tuesday 13 May 2025 19:53:54 +0000 (0:00:00.404) 0:00:31.334 *********** 2025-05-13 19:53:54.672092 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61) 2025-05-13 19:53:54.672204 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61) 2025-05-13 19:53:54.674269 | orchestrator | 2025-05-13 19:53:54.675825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:53:54.679970 | orchestrator | Tuesday 13 May 2025 19:53:54 +0000 (0:00:00.467) 0:00:31.801 *********** 2025-05-13 19:53:54.990295 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:53:54.990852 | orchestrator | 2025-05-13 19:53:54.991927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:54.991937 | orchestrator | Tuesday 13 May 2025 19:53:54 +0000 (0:00:00.319) 0:00:32.121 *********** 2025-05-13 19:53:55.382532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-13 19:53:55.383293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-13 19:53:55.383614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-13 19:53:55.388284 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-13 19:53:55.388339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-13 19:53:55.388943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-13 19:53:55.390100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-13 19:53:55.390717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-13 19:53:55.392039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-13 19:53:55.392427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-13 19:53:55.395929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-13 19:53:55.396282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-13 19:53:55.396852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-13 19:53:55.397574 | orchestrator | 2025-05-13 19:53:55.398242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:55.399046 | orchestrator | Tuesday 13 May 2025 19:53:55 +0000 (0:00:00.392) 0:00:32.513 *********** 2025-05-13 19:53:55.590944 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:55.591753 | orchestrator | 2025-05-13 19:53:55.593309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:55.594614 | orchestrator | Tuesday 13 May 2025 19:53:55 +0000 (0:00:00.209) 0:00:32.723 *********** 2025-05-13 19:53:55.808347 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:55.811605 | orchestrator | 2025-05-13 19:53:55.814286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:55.814418 | orchestrator | Tuesday 13 May 2025 19:53:55 +0000 (0:00:00.216) 0:00:32.940 *********** 2025-05-13 19:53:56.022287 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:56.022432 | orchestrator | 2025-05-13 19:53:56.024099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:56.024971 | orchestrator | Tuesday 13 May 2025 19:53:56 +0000 (0:00:00.209) 0:00:33.149 *********** 2025-05-13 19:53:56.223254 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:56.225302 | orchestrator | 2025-05-13 19:53:56.226296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:56.227403 | orchestrator | Tuesday 13 May 2025 19:53:56 +0000 (0:00:00.200) 0:00:33.349 *********** 2025-05-13 19:53:56.415232 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:56.417530 | orchestrator | 2025-05-13 19:53:56.418460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:56.419997 | orchestrator | Tuesday 13 May 2025 19:53:56 +0000 (0:00:00.197) 0:00:33.547 *********** 2025-05-13 19:53:57.197423 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:57.198326 | orchestrator | 2025-05-13 19:53:57.199769 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-13 19:53:57.200196 | orchestrator | Tuesday 13 May 2025 19:53:57 +0000 (0:00:00.782) 0:00:34.329 *********** 2025-05-13 19:53:57.415138 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:57.415282 | orchestrator | 2025-05-13 19:53:57.415364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:57.416102 | orchestrator | Tuesday 13 May 2025 19:53:57 +0000 (0:00:00.218) 0:00:34.547 *********** 2025-05-13 19:53:57.632349 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:57.632991 | orchestrator | 2025-05-13 19:53:57.634462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:57.637040 | orchestrator | Tuesday 13 May 2025 19:53:57 +0000 (0:00:00.217) 0:00:34.764 *********** 2025-05-13 19:53:58.326654 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-13 19:53:58.327608 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-13 19:53:58.327755 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-13 19:53:58.328839 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-13 19:53:58.330643 | orchestrator | 2025-05-13 19:53:58.330692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:58.331499 | orchestrator | Tuesday 13 May 2025 19:53:58 +0000 (0:00:00.693) 0:00:35.457 *********** 2025-05-13 19:53:58.543158 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:58.543319 | orchestrator | 2025-05-13 19:53:58.544014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:58.545116 | orchestrator | Tuesday 13 May 2025 19:53:58 +0000 (0:00:00.217) 0:00:35.674 *********** 2025-05-13 19:53:58.783424 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:58.783820 | orchestrator | 2025-05-13 19:53:58.784344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:58.785144 | orchestrator | Tuesday 13 May 2025 19:53:58 +0000 (0:00:00.241) 0:00:35.915 *********** 2025-05-13 19:53:59.039279 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:59.042081 | orchestrator | 2025-05-13 19:53:59.042611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:53:59.043438 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.253) 0:00:36.169 *********** 2025-05-13 19:53:59.265389 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:59.265797 | orchestrator | 2025-05-13 19:53:59.266688 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-13 19:53:59.267795 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.227) 0:00:36.397 *********** 2025-05-13 19:53:59.523695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-13 19:53:59.523868 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-13 19:53:59.525477 | orchestrator | 2025-05-13 19:53:59.526629 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-13 19:53:59.528889 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.258) 0:00:36.656 *********** 2025-05-13 19:53:59.649356 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:59.649496 | orchestrator | 2025-05-13 19:53:59.649718 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-05-13 19:53:59.650804 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.125) 0:00:36.781 *********** 2025-05-13 19:53:59.791984 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:59.794501 | orchestrator | 2025-05-13 19:53:59.795959 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-13 19:53:59.796861 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.143) 0:00:36.924 *********** 2025-05-13 19:53:59.937857 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:53:59.939609 | orchestrator | 2025-05-13 19:53:59.939666 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-13 19:53:59.940551 | orchestrator | Tuesday 13 May 2025 19:53:59 +0000 (0:00:00.144) 0:00:37.069 *********** 2025-05-13 19:54:00.310663 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:54:00.310897 | orchestrator | 2025-05-13 19:54:00.310931 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-13 19:54:00.311205 | orchestrator | Tuesday 13 May 2025 19:54:00 +0000 (0:00:00.368) 0:00:37.438 *********** 2025-05-13 19:54:00.496798 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9e27190a-cad1-5451-a880-ae60fcff608c'}}) 2025-05-13 19:54:00.497027 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f4317e9-8e5a-55d6-81df-460521249898'}}) 2025-05-13 19:54:00.499020 | orchestrator | 2025-05-13 19:54:00.501631 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-13 19:54:00.502955 | orchestrator | Tuesday 13 May 2025 19:54:00 +0000 (0:00:00.188) 0:00:37.626 *********** 2025-05-13 19:54:00.657956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9e27190a-cad1-5451-a880-ae60fcff608c'}})  2025-05-13 19:54:00.658115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f4317e9-8e5a-55d6-81df-460521249898'}})  2025-05-13 19:54:00.658508 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:54:00.658976 | orchestrator | 2025-05-13 19:54:00.659281 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-13 19:54:00.659967 | orchestrator | Tuesday 13 May 2025 19:54:00 +0000 (0:00:00.164) 0:00:37.790 *********** 2025-05-13 19:54:00.809452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9e27190a-cad1-5451-a880-ae60fcff608c'}})  2025-05-13 19:54:00.809648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f4317e9-8e5a-55d6-81df-460521249898'}})  2025-05-13 19:54:00.810940 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:54:00.811862 | orchestrator | 2025-05-13 19:54:00.812968 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-13 19:54:00.813593 | orchestrator | Tuesday 13 May 2025 19:54:00 +0000 (0:00:00.148) 0:00:37.939 *********** 2025-05-13 19:54:00.954300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9e27190a-cad1-5451-a880-ae60fcff608c'}})  2025-05-13 19:54:00.954533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6f4317e9-8e5a-55d6-81df-460521249898'}})  2025-05-13 
19:54:00.956301 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:00.959751 | orchestrator |
2025-05-13 19:54:00.960421 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-13 19:54:00.961390 | orchestrator | Tuesday 13 May 2025 19:54:00 +0000 (0:00:00.146) 0:00:38.086 ***********
2025-05-13 19:54:01.085733 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:54:01.086835 | orchestrator |
2025-05-13 19:54:01.088506 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-13 19:54:01.088538 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.130) 0:00:38.217 ***********
2025-05-13 19:54:01.230404 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:54:01.231445 | orchestrator |
2025-05-13 19:54:01.232572 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-13 19:54:01.234086 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.145) 0:00:38.362 ***********
2025-05-13 19:54:01.371915 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:01.373074 | orchestrator |
2025-05-13 19:54:01.373790 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-13 19:54:01.375126 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.141) 0:00:38.504 ***********
2025-05-13 19:54:01.517702 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:01.517891 | orchestrator |
2025-05-13 19:54:01.518833 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-13 19:54:01.519750 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.145) 0:00:38.649 ***********
2025-05-13 19:54:01.653938 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:01.654166 | orchestrator |
2025-05-13 19:54:01.655652 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-13 19:54:01.656479 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.135) 0:00:38.785 ***********
2025-05-13 19:54:01.792347 | orchestrator | ok: [testbed-node-5] => {
2025-05-13 19:54:01.792897 | orchestrator |  "ceph_osd_devices": {
2025-05-13 19:54:01.793961 | orchestrator |  "sdb": {
2025-05-13 19:54:01.795455 | orchestrator |  "osd_lvm_uuid": "9e27190a-cad1-5451-a880-ae60fcff608c"
2025-05-13 19:54:01.796135 | orchestrator |  },
2025-05-13 19:54:01.796907 | orchestrator |  "sdc": {
2025-05-13 19:54:01.798253 | orchestrator |  "osd_lvm_uuid": "6f4317e9-8e5a-55d6-81df-460521249898"
2025-05-13 19:54:01.799691 | orchestrator |  }
2025-05-13 19:54:01.800208 | orchestrator |  }
2025-05-13 19:54:01.801242 | orchestrator | }
2025-05-13 19:54:01.801725 | orchestrator |
2025-05-13 19:54:01.802709 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-13 19:54:01.803013 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.139) 0:00:38.924 ***********
2025-05-13 19:54:01.926321 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:01.928848 | orchestrator |
2025-05-13 19:54:01.929603 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-13 19:54:01.930693 | orchestrator | Tuesday 13 May 2025 19:54:01 +0000 (0:00:00.134) 0:00:39.058 ***********
2025-05-13 19:54:02.272051 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:02.272596 | orchestrator |
2025-05-13 19:54:02.273155 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-13 19:54:02.274482 | orchestrator | Tuesday 13 May 2025 19:54:02 +0000 (0:00:00.343) 0:00:39.402 ***********
2025-05-13 19:54:02.404773 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:54:02.405451 | orchestrator |
2025-05-13 19:54:02.405899 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-13 19:54:02.406927 | orchestrator | Tuesday 13 May 2025 19:54:02 +0000 (0:00:00.133) 0:00:39.536 ***********
2025-05-13 19:54:02.624039 | orchestrator | changed: [testbed-node-5] => {
2025-05-13 19:54:02.624439 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-13 19:54:02.625205 | orchestrator |  "ceph_osd_devices": {
2025-05-13 19:54:02.626145 | orchestrator |  "sdb": {
2025-05-13 19:54:02.627005 | orchestrator |  "osd_lvm_uuid": "9e27190a-cad1-5451-a880-ae60fcff608c"
2025-05-13 19:54:02.627703 | orchestrator |  },
2025-05-13 19:54:02.628652 | orchestrator |  "sdc": {
2025-05-13 19:54:02.629025 | orchestrator |  "osd_lvm_uuid": "6f4317e9-8e5a-55d6-81df-460521249898"
2025-05-13 19:54:02.629637 | orchestrator |  }
2025-05-13 19:54:02.630243 | orchestrator |  },
2025-05-13 19:54:02.630741 | orchestrator |  "lvm_volumes": [
2025-05-13 19:54:02.631489 | orchestrator |  {
2025-05-13 19:54:02.633326 | orchestrator |  "data": "osd-block-9e27190a-cad1-5451-a880-ae60fcff608c",
2025-05-13 19:54:02.633727 | orchestrator |  "data_vg": "ceph-9e27190a-cad1-5451-a880-ae60fcff608c"
2025-05-13 19:54:02.634064 | orchestrator |  },
2025-05-13 19:54:02.634592 | orchestrator |  {
2025-05-13 19:54:02.635022 | orchestrator |  "data": "osd-block-6f4317e9-8e5a-55d6-81df-460521249898",
2025-05-13 19:54:02.635737 | orchestrator |  "data_vg": "ceph-6f4317e9-8e5a-55d6-81df-460521249898"
2025-05-13 19:54:02.635994 | orchestrator |  }
2025-05-13 19:54:02.636386 | orchestrator |  ]
2025-05-13 19:54:02.636815 | orchestrator |  }
2025-05-13 19:54:02.637540 | orchestrator | }
2025-05-13 19:54:02.637840 | orchestrator |
2025-05-13 19:54:02.638380 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-13 19:54:02.638845 | orchestrator | Tuesday 13 May 2025 19:54:02 +0000 (0:00:00.219) 0:00:39.756 ***********
2025-05-13 19:54:03.624679 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-13 19:54:03.625486 | orchestrator |
2025-05-13 19:54:03.626898 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:54:03.627298 | orchestrator | 2025-05-13 19:54:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:54:03.627395 | orchestrator | 2025-05-13 19:54:03 | INFO  | Please wait and do not abort execution.
2025-05-13 19:54:03.628434 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-13 19:54:03.629374 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-13 19:54:03.629661 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-13 19:54:03.630765 | orchestrator |
2025-05-13 19:54:03.631859 | orchestrator |
2025-05-13 19:54:03.632917 | orchestrator |
2025-05-13 19:54:03.633278 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:54:03.634154 | orchestrator | Tuesday 13 May 2025 19:54:03 +0000 (0:00:00.998) 0:00:40.755 ***********
2025-05-13 19:54:03.634900 | orchestrator | ===============================================================================
2025-05-13 19:54:03.635550 | orchestrator | Write configuration file ------------------------------------------------ 4.04s
2025-05-13 19:54:03.635945 | orchestrator | Get initial list of available block devices ----------------------------- 1.23s
2025-05-13 19:54:03.636794 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s
2025-05-13 19:54:03.637496 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-05-13 19:54:03.637852 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.91s
2025-05-13 19:54:03.638689 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-05-13 19:54:03.639121 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2025-05-13 19:54:03.639614 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-05-13 19:54:03.639988 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.76s
2025-05-13 19:54:03.640778 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-05-13 19:54:03.641675 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-13 19:54:03.641991 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.64s
2025-05-13 19:54:03.642586 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s
2025-05-13 19:54:03.642905 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-05-13 19:54:03.643397 | orchestrator | Print configuration data ------------------------------------------------ 0.62s
2025-05-13 19:54:03.643790 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-05-13 19:54:03.644473 | orchestrator | Set WAL devices config data --------------------------------------------- 0.60s
2025-05-13 19:54:03.645440 | orchestrator | Print DB devices -------------------------------------------------------- 0.57s
2025-05-13 19:54:03.645910 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-05-13 19:54:03.646283 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2025-05-13 19:54:16.120752 | orchestrator | 2025-05-13 19:54:16 | INFO  | Task 12bfbee0-e994-4020-9c1c-05ced8a8e171 is running in background. Output coming soon.
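[Editor's note] The _ceph_configure_lvm_config_data blob printed above for testbed-node-5 corresponds to the following YAML, which is what the "Write configuration file" handler persisted via testbed-manager. The values are taken directly from the log output; the exact target file and path on the manager are an assumption and are not shown in the output.

    # Sketch: YAML form of the configuration written for testbed-node-5
    # (values from the log; file location is an assumption)
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 9e27190a-cad1-5451-a880-ae60fcff608c
      sdc:
        osd_lvm_uuid: 6f4317e9-8e5a-55d6-81df-460521249898
    lvm_volumes:
      - data: osd-block-9e27190a-cad1-5451-a880-ae60fcff608c
        data_vg: ceph-9e27190a-cad1-5451-a880-ae60fcff608c
      - data: osd-block-6f4317e9-8e5a-55d6-81df-460521249898
        data_vg: ceph-6f4317e9-8e5a-55d6-81df-460521249898

Each ceph_osd_devices entry (one per OSD disk) is expanded into one lvm_volumes entry whose data LV and data VG are both named after the device's osd_lvm_uuid.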
2025-05-13 19:55:01.266975 | orchestrator | 2025-05-13 19:54:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-13 19:55:01.267130 | orchestrator | 2025-05-13 19:54:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-13 19:55:01.267317 | orchestrator | 2025-05-13 19:54:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-13 19:55:01.267342 | orchestrator | 2025-05-13 19:54:48 | INFO  | Handling group overwrites in 99-overwrite
2025-05-13 19:55:01.267378 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group frr:children from 60-generic
2025-05-13 19:55:01.267397 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group storage:children from 50-kolla
2025-05-13 19:55:01.267414 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-13 19:55:01.267433 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-13 19:55:01.267452 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-13 19:55:01.267474 | orchestrator | 2025-05-13 19:54:48 | INFO  | Handling group overwrites in 20-roles
2025-05-13 19:55:01.267496 | orchestrator | 2025-05-13 19:54:48 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-13 19:55:01.267518 | orchestrator | 2025-05-13 19:54:49 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-13 19:55:01.267540 | orchestrator | 2025-05-13 19:55:00 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-13 19:55:03.301706 | orchestrator | 2025-05-13 19:55:03 | INFO  | Task e6021a25-6ebf-4d24-9711-f2b4c346de22 (ceph-create-lvm-devices) was prepared for execution.
2025-05-13 19:55:03.301815 | orchestrator | 2025-05-13 19:55:03 | INFO  | It takes a moment until task e6021a25-6ebf-4d24-9711-f2b4c346de22 (ceph-create-lvm-devices) has been started and output is visible here.
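[Editor's note] The osd_lvm_uuid values that the upcoming ceph-create-lvm-devices play consumes were fixed earlier by the "Set UUIDs for OSD VGs/LVs" task; they are stable across reruns, which is how the create play can rediscover the same VG/LV names the configure play wrote. A minimal sketch of one way to derive such stable UUIDs with Ansible's stock to_uuid filter (UUIDv5, deterministic for a given input string); the hostname+device seed and the task body are assumptions about the approach, not the task OSISM actually ships:

    # Sketch only: deterministically fill in missing osd_lvm_uuid values.
    # to_uuid always returns the same UUID for the same seed string.
    - name: Set UUIDs for OSD VGs/LVs (sketch)
      ansible.builtin.set_fact:
        ceph_osd_devices: >-
          {{ ceph_osd_devices | combine({item.key:
             {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item.key) | to_uuid}}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"
      when: item.value is none   # matches the {'key': 'sdb', 'value': None} items seen in the log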
2025-05-13 19:55:07.754805 | orchestrator | 2025-05-13 19:55:07.754932 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-13 19:55:07.755201 | orchestrator | 2025-05-13 19:55:07.756681 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 19:55:07.756714 | orchestrator | Tuesday 13 May 2025 19:55:07 +0000 (0:00:00.310) 0:00:00.310 *********** 2025-05-13 19:55:07.982645 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 19:55:07.982889 | orchestrator | 2025-05-13 19:55:07.983952 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 19:55:07.985361 | orchestrator | Tuesday 13 May 2025 19:55:07 +0000 (0:00:00.229) 0:00:00.539 *********** 2025-05-13 19:55:08.234002 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:08.235386 | orchestrator | 2025-05-13 19:55:08.235881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:08.237198 | orchestrator | Tuesday 13 May 2025 19:55:08 +0000 (0:00:00.252) 0:00:00.791 *********** 2025-05-13 19:55:08.688251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-13 19:55:08.688501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-13 19:55:08.689626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-13 19:55:08.690896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-13 19:55:08.693265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-13 19:55:08.693676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-13 19:55:08.696576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-13 19:55:08.697835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-13 19:55:08.699299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-13 19:55:08.700217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-13 19:55:08.701102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-13 19:55:08.701763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-13 19:55:08.702364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-13 19:55:08.703273 | orchestrator | 2025-05-13 19:55:08.703862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:08.704502 | orchestrator | Tuesday 13 May 2025 19:55:08 +0000 (0:00:00.454) 0:00:01.246 *********** 2025-05-13 19:55:09.160402 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:09.160634 | orchestrator | 2025-05-13 19:55:09.161431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:09.163538 | orchestrator | Tuesday 13 May 2025 19:55:09 +0000 (0:00:00.470) 0:00:01.717 *********** 2025-05-13 19:55:09.360821 | orchestrator | skipping: [testbed-node-3] 2025-05-13 
19:55:09.361499 | orchestrator | 2025-05-13 19:55:09.362098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:09.363079 | orchestrator | Tuesday 13 May 2025 19:55:09 +0000 (0:00:00.202) 0:00:01.919 *********** 2025-05-13 19:55:09.549600 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:09.550111 | orchestrator | 2025-05-13 19:55:09.550905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:09.551628 | orchestrator | Tuesday 13 May 2025 19:55:09 +0000 (0:00:00.188) 0:00:02.108 *********** 2025-05-13 19:55:09.744817 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:09.745231 | orchestrator | 2025-05-13 19:55:09.745741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:09.746328 | orchestrator | Tuesday 13 May 2025 19:55:09 +0000 (0:00:00.194) 0:00:02.303 *********** 2025-05-13 19:55:09.950635 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:09.953852 | orchestrator | 2025-05-13 19:55:09.954904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:09.955342 | orchestrator | Tuesday 13 May 2025 19:55:09 +0000 (0:00:00.203) 0:00:02.507 *********** 2025-05-13 19:55:10.166216 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:10.166330 | orchestrator | 2025-05-13 19:55:10.166463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:10.166973 | orchestrator | Tuesday 13 May 2025 19:55:10 +0000 (0:00:00.217) 0:00:02.724 *********** 2025-05-13 19:55:10.350438 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:10.350646 | orchestrator | 2025-05-13 19:55:10.350725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:10.351332 | orchestrator | Tuesday 13 May 2025 19:55:10 +0000 (0:00:00.184) 0:00:02.909 *********** 2025-05-13 19:55:10.573560 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:10.576673 | orchestrator | 2025-05-13 19:55:10.578365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:10.578508 | orchestrator | Tuesday 13 May 2025 19:55:10 +0000 (0:00:00.215) 0:00:03.125 *********** 2025-05-13 19:55:10.959375 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b) 2025-05-13 19:55:10.959524 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b) 2025-05-13 19:55:10.960620 | orchestrator | 2025-05-13 19:55:10.961552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:10.962487 | orchestrator | Tuesday 13 May 2025 19:55:10 +0000 (0:00:00.389) 0:00:03.515 *********** 2025-05-13 19:55:11.357685 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd) 2025-05-13 19:55:11.358347 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd) 2025-05-13 19:55:11.358991 | orchestrator | 2025-05-13 19:55:11.360274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:11.360960 | orchestrator | Tuesday 13 May 2025 19:55:11 +0000 (0:00:00.400) 0:00:03.915 *********** 2025-05-13 
19:55:11.966939 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161) 2025-05-13 19:55:11.967187 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161) 2025-05-13 19:55:11.967214 | orchestrator | 2025-05-13 19:55:11.967957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:11.969230 | orchestrator | Tuesday 13 May 2025 19:55:11 +0000 (0:00:00.606) 0:00:04.522 *********** 2025-05-13 19:55:12.811857 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4) 2025-05-13 19:55:12.812713 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4) 2025-05-13 19:55:12.814275 | orchestrator | 2025-05-13 19:55:12.814888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:12.815461 | orchestrator | Tuesday 13 May 2025 19:55:12 +0000 (0:00:00.848) 0:00:05.370 *********** 2025-05-13 19:55:13.150684 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:55:13.150849 | orchestrator | 2025-05-13 19:55:13.152313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:13.152524 | orchestrator | Tuesday 13 May 2025 19:55:13 +0000 (0:00:00.339) 0:00:05.709 *********** 2025-05-13 19:55:13.556884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-13 19:55:13.557872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-13 19:55:13.559191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-13 19:55:13.560003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-13 19:55:13.560651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-13 19:55:13.561738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-13 19:55:13.563086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-13 19:55:13.564364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-13 19:55:13.565435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-13 19:55:13.566264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-13 19:55:13.567232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-13 19:55:13.568345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-13 19:55:13.568897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-13 19:55:13.569410 | orchestrator | 2025-05-13 19:55:13.569867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:13.570545 | orchestrator | Tuesday 13 May 2025 19:55:13 +0000 (0:00:00.405) 0:00:06.115 *********** 2025-05-13 19:55:13.762225 | orchestrator | skipping: [testbed-node-3] 
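[Editor's note] The repeated "Add known links" tasks above resolve /dev/disk/by-id aliases (scsi-0QEMU_QEMU_HARDDISK_..., ata-QEMU_DVD-ROM_QM00001) back to kernel device names such as sdb, so disks can be addressed by a stable identifier. A minimal sketch of that kind of resolution, assuming it is plain symlink walking; the task names and the device_aliases variable are illustrative and not taken from /ansible/tasks/_add-device-links.yml:

    # Sketch only: map /dev/disk/by-id aliases to kernel device names.
    - name: Find all by-id aliases
      ansible.builtin.find:
        paths: /dev/disk/by-id
        file_type: link
      register: _by_id_links

    - name: Stat each alias to obtain its link target
      ansible.builtin.stat:
        path: "{{ item.path }}"
      loop: "{{ _by_id_links.files }}"
      register: _alias_stats

    - name: Build alias -> device map (e.g. scsi-0QEMU_... -> sdb)
      ansible.builtin.set_fact:
        device_aliases: >-
          {{ device_aliases | default({})
             | combine({item.item.path | basename: item.stat.lnk_source | basename}) }}
      loop: "{{ _alias_stats.results }}"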
2025-05-13 19:55:13.762944 | orchestrator | 2025-05-13 19:55:13.764006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:13.765853 | orchestrator | Tuesday 13 May 2025 19:55:13 +0000 (0:00:00.204) 0:00:06.320 *********** 2025-05-13 19:55:13.951824 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:13.952378 | orchestrator | 2025-05-13 19:55:13.953450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:13.955381 | orchestrator | Tuesday 13 May 2025 19:55:13 +0000 (0:00:00.188) 0:00:06.509 *********** 2025-05-13 19:55:14.142464 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:14.142570 | orchestrator | 2025-05-13 19:55:14.142877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:14.143936 | orchestrator | Tuesday 13 May 2025 19:55:14 +0000 (0:00:00.190) 0:00:06.700 *********** 2025-05-13 19:55:14.329414 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:14.330262 | orchestrator | 2025-05-13 19:55:14.330802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:14.331057 | orchestrator | Tuesday 13 May 2025 19:55:14 +0000 (0:00:00.187) 0:00:06.887 *********** 2025-05-13 19:55:14.524590 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:14.524700 | orchestrator | 2025-05-13 19:55:14.525712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:14.527540 | orchestrator | Tuesday 13 May 2025 19:55:14 +0000 (0:00:00.194) 0:00:07.082 *********** 2025-05-13 19:55:14.712845 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:14.713014 | orchestrator | 2025-05-13 19:55:14.714534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:14.715242 | orchestrator | Tuesday 13 May 2025 19:55:14 +0000 (0:00:00.188) 0:00:07.271 *********** 2025-05-13 19:55:14.906995 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:14.907840 | orchestrator | 2025-05-13 19:55:14.909266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:14.909508 | orchestrator | Tuesday 13 May 2025 19:55:14 +0000 (0:00:00.192) 0:00:07.463 *********** 2025-05-13 19:55:15.113073 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:15.115615 | orchestrator | 2025-05-13 19:55:15.116063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:15.117256 | orchestrator | Tuesday 13 May 2025 19:55:15 +0000 (0:00:00.206) 0:00:07.670 *********** 2025-05-13 19:55:16.189736 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-13 19:55:16.190617 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-13 19:55:16.192398 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-13 19:55:16.192836 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-13 19:55:16.193820 | orchestrator | 2025-05-13 19:55:16.193881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:16.194356 | orchestrator | Tuesday 13 May 2025 19:55:16 +0000 (0:00:01.076) 0:00:08.746 *********** 2025-05-13 19:55:16.402841 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:16.403410 | orchestrator | 2025-05-13 19:55:16.405177 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:16.406111 | orchestrator | Tuesday 13 May 2025 19:55:16 +0000 (0:00:00.214) 0:00:08.960 *********** 2025-05-13 19:55:16.613691 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:16.614250 | orchestrator | 2025-05-13 19:55:16.615275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:16.616710 | orchestrator | Tuesday 13 May 2025 19:55:16 +0000 (0:00:00.211) 0:00:09.172 *********** 2025-05-13 19:55:16.823317 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:16.823560 | orchestrator | 2025-05-13 19:55:16.824853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:16.826541 | orchestrator | Tuesday 13 May 2025 19:55:16 +0000 (0:00:00.208) 0:00:09.380 *********** 2025-05-13 19:55:17.030370 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:17.031645 | orchestrator | 2025-05-13 19:55:17.032311 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 19:55:17.033327 | orchestrator | Tuesday 13 May 2025 19:55:17 +0000 (0:00:00.206) 0:00:09.587 *********** 2025-05-13 19:55:17.165876 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:17.166506 | orchestrator | 2025-05-13 19:55:17.167537 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 19:55:17.168626 | orchestrator | Tuesday 13 May 2025 19:55:17 +0000 (0:00:00.136) 0:00:09.724 *********** 2025-05-13 19:55:17.341420 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb14b8c1-d757-5b78-a398-3e433d34ee3e'}}) 2025-05-13 19:55:17.341529 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '55d6de5b-857a-5090-90bd-6b26b006e6c2'}}) 2025-05-13 19:55:17.342545 | orchestrator | 2025-05-13 19:55:17.343691 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 19:55:17.344723 | orchestrator | Tuesday 13 May 2025 19:55:17 +0000 (0:00:00.175) 0:00:09.899 *********** 2025-05-13 19:55:19.252004 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'}) 2025-05-13 19:55:19.252154 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'}) 2025-05-13 19:55:19.252287 | orchestrator | 2025-05-13 19:55:19.252683 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-13 19:55:19.252997 | orchestrator | Tuesday 13 May 2025 19:55:19 +0000 (0:00:01.908) 0:00:11.808 *********** 2025-05-13 19:55:19.400444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:19.400927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:19.403059 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:19.403958 | orchestrator | 2025-05-13 19:55:19.403988 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 
19:55:19.404721 | orchestrator | Tuesday 13 May 2025 19:55:19 +0000 (0:00:00.148) 0:00:11.957 *********** 2025-05-13 19:55:20.779616 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'}) 2025-05-13 19:55:20.780302 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'}) 2025-05-13 19:55:20.781652 | orchestrator | 2025-05-13 19:55:20.782935 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 19:55:20.783546 | orchestrator | Tuesday 13 May 2025 19:55:20 +0000 (0:00:01.380) 0:00:13.337 *********** 2025-05-13 19:55:20.925552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:20.926680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:20.928737 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:20.928768 | orchestrator | 2025-05-13 19:55:20.928866 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 19:55:20.929472 | orchestrator | Tuesday 13 May 2025 19:55:20 +0000 (0:00:00.145) 0:00:13.482 *********** 2025-05-13 19:55:21.065416 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.065510 | orchestrator | 2025-05-13 19:55:21.065585 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 19:55:21.065852 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.140) 0:00:13.622 *********** 2025-05-13 19:55:21.397351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:21.397566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:21.397657 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.399151 | orchestrator | 2025-05-13 19:55:21.399203 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 19:55:21.399795 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.330) 0:00:13.953 *********** 2025-05-13 19:55:21.530903 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.531037 | orchestrator | 2025-05-13 19:55:21.531635 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 19:55:21.532327 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.136) 0:00:14.089 *********** 2025-05-13 19:55:21.675027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:21.675998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:21.677831 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.678599 | orchestrator | 2025-05-13 19:55:21.679567 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 19:55:21.680704 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.143) 0:00:14.233 *********** 2025-05-13 19:55:21.806382 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.807332 | orchestrator | 2025-05-13 19:55:21.809144 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 19:55:21.810123 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.132) 0:00:14.365 *********** 2025-05-13 19:55:21.948147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:21.948251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:21.948533 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:21.948731 | orchestrator | 2025-05-13 19:55:21.949186 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 19:55:21.949616 | orchestrator | Tuesday 13 May 2025 19:55:21 +0000 (0:00:00.140) 0:00:14.506 *********** 2025-05-13 19:55:22.084469 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:22.084574 | orchestrator | 2025-05-13 19:55:22.085516 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 19:55:22.086632 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.135) 0:00:14.641 *********** 2025-05-13 19:55:22.242009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:22.243421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:22.244725 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.248293 | orchestrator | 2025-05-13 19:55:22.249864 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 19:55:22.249892 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.159) 0:00:14.800 *********** 2025-05-13 19:55:22.393622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:22.393798 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:22.393835 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.393944 | orchestrator | 2025-05-13 19:55:22.394655 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 19:55:22.395330 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.146) 0:00:14.947 *********** 2025-05-13 19:55:22.537065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:22.539282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  
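[Editor's note] The "Create block VGs" and "Create block LVs" changes above create one volume group and one full-size logical volume per OSD disk, both named after the device's osd_lvm_uuid. A minimal sketch of equivalent tasks with community.general.lvg and community.general.lvol, using the testbed-node-3 data from the log; whether the playbook uses these modules or shells out to vgcreate/lvcreate directly is an assumption:

    # Sketch only: one VG and one LV per OSD disk.
    - name: Create block VGs (sketch)
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"   # e.g. ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e
        pvs: "/dev/{{ item.key }}"                 # e.g. /dev/sdb
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs spanning each VG (sketch)
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
      loop: "{{ ceph_osd_devices | dict2items }}"

With size: 100%VG each data LV consumes the whole device, which matches the block-only layout implied by all the skipped DB/WAL tasks in this run.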
2025-05-13 19:55:22.539594 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.540489 | orchestrator | 2025-05-13 19:55:22.541153 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 19:55:22.541508 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.147) 0:00:15.095 *********** 2025-05-13 19:55:22.667383 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.669514 | orchestrator | 2025-05-13 19:55:22.672242 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 19:55:22.672957 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.130) 0:00:15.225 *********** 2025-05-13 19:55:22.798432 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.800810 | orchestrator | 2025-05-13 19:55:22.800859 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 19:55:22.800873 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.126) 0:00:15.352 *********** 2025-05-13 19:55:22.925457 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:22.925586 | orchestrator | 2025-05-13 19:55:22.925601 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 19:55:22.925704 | orchestrator | Tuesday 13 May 2025 19:55:22 +0000 (0:00:00.129) 0:00:15.482 *********** 2025-05-13 19:55:23.265567 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 19:55:23.266582 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 19:55:23.267262 | orchestrator | } 2025-05-13 19:55:23.268702 | orchestrator | 2025-05-13 19:55:23.269638 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 19:55:23.270730 | orchestrator | Tuesday 13 May 2025 19:55:23 +0000 (0:00:00.341) 0:00:15.823 *********** 2025-05-13 19:55:23.422307 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 19:55:23.424932 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 19:55:23.424965 | orchestrator | } 2025-05-13 19:55:23.425600 | orchestrator | 2025-05-13 19:55:23.426129 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 19:55:23.427137 | orchestrator | Tuesday 13 May 2025 19:55:23 +0000 (0:00:00.155) 0:00:15.979 *********** 2025-05-13 19:55:23.556245 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 19:55:23.556341 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 19:55:23.558169 | orchestrator | } 2025-05-13 19:55:23.559483 | orchestrator | 2025-05-13 19:55:23.559508 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-13 19:55:23.560288 | orchestrator | Tuesday 13 May 2025 19:55:23 +0000 (0:00:00.133) 0:00:16.113 *********** 2025-05-13 19:55:24.209998 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:24.211859 | orchestrator | 2025-05-13 19:55:24.212430 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 19:55:24.212900 | orchestrator | Tuesday 13 May 2025 19:55:24 +0000 (0:00:00.654) 0:00:16.768 *********** 2025-05-13 19:55:24.724780 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:24.724892 | orchestrator | 2025-05-13 19:55:24.724972 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 19:55:24.725737 | orchestrator | Tuesday 13 May 2025 19:55:24 +0000 (0:00:00.510) 
0:00:17.278 *********** 2025-05-13 19:55:25.215813 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:25.215925 | orchestrator | 2025-05-13 19:55:25.217911 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 19:55:25.218573 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.495) 0:00:17.773 *********** 2025-05-13 19:55:25.366898 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:25.367648 | orchestrator | 2025-05-13 19:55:25.368535 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 19:55:25.369486 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.149) 0:00:17.923 *********** 2025-05-13 19:55:25.469851 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:25.471450 | orchestrator | 2025-05-13 19:55:25.472576 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 19:55:25.474230 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.104) 0:00:18.028 *********** 2025-05-13 19:55:25.590218 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:25.590389 | orchestrator | 2025-05-13 19:55:25.591963 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 19:55:25.592809 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.119) 0:00:18.147 *********** 2025-05-13 19:55:25.736211 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 19:55:25.736381 | orchestrator |  "vgs_report": { 2025-05-13 19:55:25.739586 | orchestrator |  "vg": [] 2025-05-13 19:55:25.740349 | orchestrator |  } 2025-05-13 19:55:25.741295 | orchestrator | } 2025-05-13 19:55:25.742454 | orchestrator | 2025-05-13 19:55:25.743061 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-13 19:55:25.743599 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.145) 0:00:18.292 *********** 2025-05-13 19:55:25.866615 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:25.866717 | orchestrator | 2025-05-13 19:55:25.867650 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 19:55:25.868623 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.131) 0:00:18.424 *********** 2025-05-13 19:55:25.999573 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:25.999900 | orchestrator | 2025-05-13 19:55:26.001162 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 19:55:26.001834 | orchestrator | Tuesday 13 May 2025 19:55:25 +0000 (0:00:00.133) 0:00:18.557 *********** 2025-05-13 19:55:26.345815 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:26.346166 | orchestrator | 2025-05-13 19:55:26.347412 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 19:55:26.347632 | orchestrator | Tuesday 13 May 2025 19:55:26 +0000 (0:00:00.346) 0:00:18.904 *********** 2025-05-13 19:55:26.476981 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:26.477731 | orchestrator | 2025-05-13 19:55:26.478945 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 19:55:26.480183 | orchestrator | Tuesday 13 May 2025 19:55:26 +0000 (0:00:00.131) 0:00:19.035 *********** 2025-05-13 19:55:26.610322 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:26.610426 | orchestrator | 
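The three "Gather ... VGs with total and available size in bytes" tasks above query LVM for a JSON report of each volume group's total and free bytes; on this node no DB/WAL/DB+WAL VGs exist, so the combined report prints as an empty "vg" list. A minimal sketch of an equivalent query, assuming the role shells out to the stock LVM CLI (the task and register names below are illustrative, not taken from the playbook):

- name: Gather VGs with total and available size in bytes (sketch)
  ansible.builtin.command:
    cmd: vgs --units b --nosuffix --reportformat json -o vg_name,vg_size,vg_free
  register: _vgs_cmd_output  # illustrative name
  changed_when: false  # read-only query

- name: Combine JSON output into a report fact (sketch)
  ansible.builtin.set_fact:
    # `vgs --reportformat json` wraps results as {"report": [{"vg": [...]}]},
    # which matches the empty {"vg": []} printed in the log above
    vgs_report: "{{ (_vgs_cmd_output.stdout | from_json).report[0] }}"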
2025-05-13 19:55:26.611170 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 19:55:26.611750 | orchestrator | Tuesday 13 May 2025 19:55:26 +0000 (0:00:00.126) 0:00:19.162 *********** 2025-05-13 19:55:26.737964 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:26.738304 | orchestrator | 2025-05-13 19:55:26.739500 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 19:55:26.740345 | orchestrator | Tuesday 13 May 2025 19:55:26 +0000 (0:00:00.133) 0:00:19.295 *********** 2025-05-13 19:55:26.883495 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:26.884251 | orchestrator | 2025-05-13 19:55:26.885152 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 19:55:26.885909 | orchestrator | Tuesday 13 May 2025 19:55:26 +0000 (0:00:00.143) 0:00:19.439 *********** 2025-05-13 19:55:27.020135 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.020523 | orchestrator | 2025-05-13 19:55:27.022225 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 19:55:27.022961 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.139) 0:00:19.578 *********** 2025-05-13 19:55:27.146548 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.146801 | orchestrator | 2025-05-13 19:55:27.147480 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 19:55:27.148486 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.125) 0:00:19.704 *********** 2025-05-13 19:55:27.292328 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.292456 | orchestrator | 2025-05-13 19:55:27.293477 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 19:55:27.293689 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.143) 0:00:19.847 *********** 2025-05-13 19:55:27.437496 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.437674 | orchestrator | 2025-05-13 19:55:27.438883 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 19:55:27.440110 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.146) 0:00:19.994 *********** 2025-05-13 19:55:27.578388 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.579936 | orchestrator | 2025-05-13 19:55:27.581313 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 19:55:27.582541 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.142) 0:00:20.136 *********** 2025-05-13 19:55:27.729878 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.730196 | orchestrator | 2025-05-13 19:55:27.731363 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 19:55:27.733261 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.151) 0:00:20.287 *********** 2025-05-13 19:55:27.861488 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:27.863042 | orchestrator | 2025-05-13 19:55:27.863931 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 19:55:27.865194 | orchestrator | Tuesday 13 May 2025 19:55:27 +0000 (0:00:00.131) 0:00:20.419 *********** 2025-05-13 19:55:28.231992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:28.232729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:28.234011 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:28.235816 | orchestrator | 2025-05-13 19:55:28.235841 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 19:55:28.236481 | orchestrator | Tuesday 13 May 2025 19:55:28 +0000 (0:00:00.371) 0:00:20.790 *********** 2025-05-13 19:55:28.377015 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:28.379035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:28.379375 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:28.379963 | orchestrator | 2025-05-13 19:55:28.382140 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 19:55:28.382953 | orchestrator | Tuesday 13 May 2025 19:55:28 +0000 (0:00:00.143) 0:00:20.934 *********** 2025-05-13 19:55:28.528433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:28.529296 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:28.530358 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:28.531208 | orchestrator | 2025-05-13 19:55:28.532362 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 19:55:28.533272 | orchestrator | Tuesday 13 May 2025 19:55:28 +0000 (0:00:00.152) 0:00:21.086 *********** 2025-05-13 19:55:28.681949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:28.682268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:28.683766 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:28.683815 | orchestrator | 2025-05-13 19:55:28.684331 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 19:55:28.685304 | orchestrator | Tuesday 13 May 2025 19:55:28 +0000 (0:00:00.153) 0:00:21.240 *********** 2025-05-13 19:55:28.845316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:28.846480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:28.849240 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:28.849292 | orchestrator | 2025-05-13 19:55:28.849305 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-05-13 19:55:28.849818 | orchestrator | Tuesday 13 May 2025 19:55:28 +0000 (0:00:00.163) 0:00:21.403 *********** 2025-05-13 19:55:29.011818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:29.012036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:29.013213 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:29.014612 | orchestrator | 2025-05-13 19:55:29.015417 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 19:55:29.016754 | orchestrator | Tuesday 13 May 2025 19:55:29 +0000 (0:00:00.166) 0:00:21.570 *********** 2025-05-13 19:55:29.167245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:29.168010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:29.168826 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:29.169796 | orchestrator | 2025-05-13 19:55:29.170875 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 19:55:29.171973 | orchestrator | Tuesday 13 May 2025 19:55:29 +0000 (0:00:00.155) 0:00:21.725 *********** 2025-05-13 19:55:29.325989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:29.326176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:29.329045 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:29.330115 | orchestrator | 2025-05-13 19:55:29.331115 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 19:55:29.331914 | orchestrator | Tuesday 13 May 2025 19:55:29 +0000 (0:00:00.155) 0:00:21.880 *********** 2025-05-13 19:55:29.822572 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:29.822804 | orchestrator | 2025-05-13 19:55:29.824257 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 19:55:29.824713 | orchestrator | Tuesday 13 May 2025 19:55:29 +0000 (0:00:00.497) 0:00:22.378 *********** 2025-05-13 19:55:30.329543 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:30.330220 | orchestrator | 2025-05-13 19:55:30.331389 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 19:55:30.333315 | orchestrator | Tuesday 13 May 2025 19:55:30 +0000 (0:00:00.509) 0:00:22.888 *********** 2025-05-13 19:55:30.481279 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:55:30.482429 | orchestrator | 2025-05-13 19:55:30.483493 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 19:55:30.484872 | orchestrator | Tuesday 13 May 2025 19:55:30 +0000 (0:00:00.150) 0:00:23.039 *********** 2025-05-13 19:55:30.648227 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'vg_name': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'}) 2025-05-13 19:55:30.648381 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'vg_name': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'}) 2025-05-13 19:55:30.649867 | orchestrator | 2025-05-13 19:55:30.651025 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 19:55:30.654409 | orchestrator | Tuesday 13 May 2025 19:55:30 +0000 (0:00:00.167) 0:00:23.206 *********** 2025-05-13 19:55:31.042144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:31.042681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:31.043391 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:31.044424 | orchestrator | 2025-05-13 19:55:31.046108 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 19:55:31.046134 | orchestrator | Tuesday 13 May 2025 19:55:31 +0000 (0:00:00.392) 0:00:23.599 *********** 2025-05-13 19:55:31.198753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:31.198851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:31.199634 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:31.200459 | orchestrator | 2025-05-13 19:55:31.202435 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-13 19:55:31.202928 | orchestrator | Tuesday 13 May 2025 19:55:31 +0000 (0:00:00.157) 0:00:23.757 *********** 2025-05-13 19:55:31.369554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'})  2025-05-13 19:55:31.369730 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'})  2025-05-13 19:55:31.369811 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:55:31.370377 | orchestrator | 2025-05-13 19:55:31.371032 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-13 19:55:31.371880 | orchestrator | Tuesday 13 May 2025 19:55:31 +0000 (0:00:00.165) 0:00:23.922 *********** 2025-05-13 19:55:31.645780 | orchestrator | ok: [testbed-node-3] => { 2025-05-13 19:55:31.647133 | orchestrator |  "lvm_report": { 2025-05-13 19:55:31.648091 | orchestrator |  "lv": [ 2025-05-13 19:55:31.649425 | orchestrator |  { 2025-05-13 19:55:31.649909 | orchestrator |  "lv_name": "osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2", 2025-05-13 19:55:31.651059 | orchestrator |  "vg_name": "ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2" 2025-05-13 19:55:31.651822 | orchestrator |  }, 2025-05-13 19:55:31.652563 | orchestrator |  { 2025-05-13 19:55:31.653141 | orchestrator |  "lv_name": "osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e", 2025-05-13 19:55:31.653765 | orchestrator |  "vg_name": 
"ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e" 2025-05-13 19:55:31.654576 | orchestrator |  } 2025-05-13 19:55:31.655195 | orchestrator |  ], 2025-05-13 19:55:31.655832 | orchestrator |  "pv": [ 2025-05-13 19:55:31.656691 | orchestrator |  { 2025-05-13 19:55:31.657128 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-13 19:55:31.657781 | orchestrator |  "vg_name": "ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e" 2025-05-13 19:55:31.658326 | orchestrator |  }, 2025-05-13 19:55:31.658732 | orchestrator |  { 2025-05-13 19:55:31.659253 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-13 19:55:31.659650 | orchestrator |  "vg_name": "ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2" 2025-05-13 19:55:31.660162 | orchestrator |  } 2025-05-13 19:55:31.660503 | orchestrator |  ] 2025-05-13 19:55:31.660846 | orchestrator |  } 2025-05-13 19:55:31.661576 | orchestrator | } 2025-05-13 19:55:31.662060 | orchestrator | 2025-05-13 19:55:31.662490 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-13 19:55:31.662928 | orchestrator | 2025-05-13 19:55:31.663366 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 19:55:31.663830 | orchestrator | Tuesday 13 May 2025 19:55:31 +0000 (0:00:00.281) 0:00:24.203 *********** 2025-05-13 19:55:31.893376 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-13 19:55:31.894115 | orchestrator | 2025-05-13 19:55:31.894355 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-13 19:55:31.894812 | orchestrator | Tuesday 13 May 2025 19:55:31 +0000 (0:00:00.248) 0:00:24.452 *********** 2025-05-13 19:55:32.122384 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:32.123414 | orchestrator | 2025-05-13 19:55:32.123730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:32.125011 | orchestrator | Tuesday 13 May 2025 19:55:32 +0000 (0:00:00.228) 0:00:24.680 *********** 2025-05-13 19:55:32.543281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-13 19:55:32.544156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-13 19:55:32.545961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-13 19:55:32.546611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-13 19:55:32.547518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-13 19:55:32.548835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-13 19:55:32.549913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-13 19:55:32.550676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-13 19:55:32.551385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-13 19:55:32.551974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-13 19:55:32.552655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-13 19:55:32.553505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-05-13 19:55:32.554158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-13 19:55:32.554626 | orchestrator | 2025-05-13 19:55:32.555234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:32.555785 | orchestrator | Tuesday 13 May 2025 19:55:32 +0000 (0:00:00.419) 0:00:25.100 *********** 2025-05-13 19:55:32.726278 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:32.727144 | orchestrator | 2025-05-13 19:55:32.727703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:32.728478 | orchestrator | Tuesday 13 May 2025 19:55:32 +0000 (0:00:00.184) 0:00:25.285 *********** 2025-05-13 19:55:32.922761 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:32.923968 | orchestrator | 2025-05-13 19:55:32.925114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:32.926204 | orchestrator | Tuesday 13 May 2025 19:55:32 +0000 (0:00:00.196) 0:00:25.481 *********** 2025-05-13 19:55:33.504637 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:33.506956 | orchestrator | 2025-05-13 19:55:33.508832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:33.509787 | orchestrator | Tuesday 13 May 2025 19:55:33 +0000 (0:00:00.581) 0:00:26.062 *********** 2025-05-13 19:55:33.724018 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:33.724704 | orchestrator | 2025-05-13 19:55:33.725548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:33.725692 | orchestrator | Tuesday 13 May 2025 19:55:33 +0000 (0:00:00.220) 0:00:26.282 *********** 2025-05-13 19:55:33.923756 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:33.924785 | orchestrator | 2025-05-13 19:55:33.925415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:33.925993 | orchestrator | Tuesday 13 May 2025 19:55:33 +0000 (0:00:00.199) 0:00:26.482 *********** 2025-05-13 19:55:34.123955 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:34.124907 | orchestrator | 2025-05-13 19:55:34.125721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:34.126861 | orchestrator | Tuesday 13 May 2025 19:55:34 +0000 (0:00:00.198) 0:00:26.681 *********** 2025-05-13 19:55:34.324323 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:34.324830 | orchestrator | 2025-05-13 19:55:34.325424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:34.325729 | orchestrator | Tuesday 13 May 2025 19:55:34 +0000 (0:00:00.201) 0:00:26.882 *********** 2025-05-13 19:55:34.539804 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:34.540251 | orchestrator | 2025-05-13 19:55:34.541380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:34.542809 | orchestrator | Tuesday 13 May 2025 19:55:34 +0000 (0:00:00.214) 0:00:27.097 *********** 2025-05-13 19:55:34.952293 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2) 2025-05-13 19:55:34.952441 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2) 2025-05-13 
19:55:34.953493 | orchestrator | 2025-05-13 19:55:34.954075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:34.954909 | orchestrator | Tuesday 13 May 2025 19:55:34 +0000 (0:00:00.413) 0:00:27.511 *********** 2025-05-13 19:55:35.385296 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043) 2025-05-13 19:55:35.385533 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043) 2025-05-13 19:55:35.386726 | orchestrator | 2025-05-13 19:55:35.387420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:35.388314 | orchestrator | Tuesday 13 May 2025 19:55:35 +0000 (0:00:00.432) 0:00:27.943 *********** 2025-05-13 19:55:35.826942 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36) 2025-05-13 19:55:35.827955 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36) 2025-05-13 19:55:35.829889 | orchestrator | 2025-05-13 19:55:35.830418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:35.831873 | orchestrator | Tuesday 13 May 2025 19:55:35 +0000 (0:00:00.440) 0:00:28.384 *********** 2025-05-13 19:55:36.271210 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb) 2025-05-13 19:55:36.271332 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb) 2025-05-13 19:55:36.271827 | orchestrator | 2025-05-13 19:55:36.274365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:36.274775 | orchestrator | Tuesday 13 May 2025 19:55:36 +0000 (0:00:00.444) 0:00:28.829 *********** 2025-05-13 19:55:36.581401 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:55:36.581843 | orchestrator | 2025-05-13 19:55:36.582206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:36.582567 | orchestrator | Tuesday 13 May 2025 19:55:36 +0000 (0:00:00.310) 0:00:29.139 *********** 2025-05-13 19:55:37.242328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-13 19:55:37.242454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-13 19:55:37.243542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-13 19:55:37.245146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-13 19:55:37.245221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-13 19:55:37.246392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-13 19:55:37.246662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-13 19:55:37.247280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-13 19:55:37.247849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-13 19:55:37.249406 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-13 19:55:37.250310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-13 19:55:37.250866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-13 19:55:37.252711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-13 19:55:37.252732 | orchestrator | 2025-05-13 19:55:37.253042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:37.253906 | orchestrator | Tuesday 13 May 2025 19:55:37 +0000 (0:00:00.658) 0:00:29.798 *********** 2025-05-13 19:55:37.462523 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:37.462633 | orchestrator | 2025-05-13 19:55:37.462645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:37.462656 | orchestrator | Tuesday 13 May 2025 19:55:37 +0000 (0:00:00.219) 0:00:30.017 *********** 2025-05-13 19:55:37.667666 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:37.667877 | orchestrator | 2025-05-13 19:55:37.668661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:37.669499 | orchestrator | Tuesday 13 May 2025 19:55:37 +0000 (0:00:00.204) 0:00:30.222 *********** 2025-05-13 19:55:37.863724 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:37.864388 | orchestrator | 2025-05-13 19:55:37.864906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:37.865433 | orchestrator | Tuesday 13 May 2025 19:55:37 +0000 (0:00:00.199) 0:00:30.422 *********** 2025-05-13 19:55:38.063904 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:38.064232 | orchestrator | 2025-05-13 19:55:38.065486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:38.066131 | orchestrator | Tuesday 13 May 2025 19:55:38 +0000 (0:00:00.199) 0:00:30.621 *********** 2025-05-13 19:55:38.269319 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:38.269531 | orchestrator | 2025-05-13 19:55:38.270414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:38.271069 | orchestrator | Tuesday 13 May 2025 19:55:38 +0000 (0:00:00.205) 0:00:30.827 *********** 2025-05-13 19:55:38.481688 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:38.482486 | orchestrator | 2025-05-13 19:55:38.484640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:38.484668 | orchestrator | Tuesday 13 May 2025 19:55:38 +0000 (0:00:00.211) 0:00:31.039 *********** 2025-05-13 19:55:38.691860 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:38.692234 | orchestrator | 2025-05-13 19:55:38.693569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:38.693750 | orchestrator | Tuesday 13 May 2025 19:55:38 +0000 (0:00:00.208) 0:00:31.248 *********** 2025-05-13 19:55:38.916415 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:38.916753 | orchestrator | 2025-05-13 19:55:38.917761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:38.918200 | orchestrator 
| Tuesday 13 May 2025 19:55:38 +0000 (0:00:00.227) 0:00:31.475 *********** 2025-05-13 19:55:39.791208 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-13 19:55:39.793587 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-13 19:55:39.794094 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-13 19:55:39.794511 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-13 19:55:39.795149 | orchestrator | 2025-05-13 19:55:39.795727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:39.796274 | orchestrator | Tuesday 13 May 2025 19:55:39 +0000 (0:00:00.872) 0:00:32.347 *********** 2025-05-13 19:55:40.003007 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:40.004592 | orchestrator | 2025-05-13 19:55:40.005737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:40.006351 | orchestrator | Tuesday 13 May 2025 19:55:39 +0000 (0:00:00.214) 0:00:32.561 *********** 2025-05-13 19:55:40.199625 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:40.201148 | orchestrator | 2025-05-13 19:55:40.202289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:40.203385 | orchestrator | Tuesday 13 May 2025 19:55:40 +0000 (0:00:00.196) 0:00:32.758 *********** 2025-05-13 19:55:40.849206 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:40.851361 | orchestrator | 2025-05-13 19:55:40.851836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:55:40.852907 | orchestrator | Tuesday 13 May 2025 19:55:40 +0000 (0:00:00.646) 0:00:33.405 *********** 2025-05-13 19:55:41.047729 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:41.047958 | orchestrator | 2025-05-13 19:55:41.050211 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 19:55:41.051767 | orchestrator | Tuesday 13 May 2025 19:55:41 +0000 (0:00:00.199) 0:00:33.604 *********** 2025-05-13 19:55:41.199455 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:41.199560 | orchestrator | 2025-05-13 19:55:41.200249 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 19:55:41.201093 | orchestrator | Tuesday 13 May 2025 19:55:41 +0000 (0:00:00.153) 0:00:33.758 *********** 2025-05-13 19:55:41.421254 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7ef241c-3ce4-53e3-9962-a0236c38cab6'}}) 2025-05-13 19:55:41.421461 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '53409cd5-715f-5221-bc58-8adc9fe4a6bc'}}) 2025-05-13 19:55:41.421552 | orchestrator | 2025-05-13 19:55:41.422838 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 19:55:41.423735 | orchestrator | Tuesday 13 May 2025 19:55:41 +0000 (0:00:00.221) 0:00:33.979 *********** 2025-05-13 19:55:43.337337 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'}) 2025-05-13 19:55:43.338128 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'}) 2025-05-13 19:55:43.339687 | orchestrator | 2025-05-13 19:55:43.339803 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-05-13 19:55:43.340719 | orchestrator | Tuesday 13 May 2025 19:55:43 +0000 (0:00:01.914) 0:00:35.894 *********** 2025-05-13 19:55:43.482890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:43.483316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:43.483995 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:43.484019 | orchestrator | 2025-05-13 19:55:43.484466 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 19:55:43.484957 | orchestrator | Tuesday 13 May 2025 19:55:43 +0000 (0:00:00.147) 0:00:36.042 *********** 2025-05-13 19:55:44.795188 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'}) 2025-05-13 19:55:44.796268 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'}) 2025-05-13 19:55:44.798414 | orchestrator | 2025-05-13 19:55:44.798441 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 19:55:44.799674 | orchestrator | Tuesday 13 May 2025 19:55:44 +0000 (0:00:01.310) 0:00:37.352 *********** 2025-05-13 19:55:44.964134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:44.964238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:44.964810 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:44.965468 | orchestrator | 2025-05-13 19:55:44.966236 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 19:55:44.968616 | orchestrator | Tuesday 13 May 2025 19:55:44 +0000 (0:00:00.169) 0:00:37.522 *********** 2025-05-13 19:55:45.090481 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:45.091377 | orchestrator | 2025-05-13 19:55:45.092730 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 19:55:45.094574 | orchestrator | Tuesday 13 May 2025 19:55:45 +0000 (0:00:00.127) 0:00:37.649 *********** 2025-05-13 19:55:45.236599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:45.236812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:45.238571 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:45.240860 | orchestrator | 2025-05-13 19:55:45.240882 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 19:55:45.245223 | orchestrator | Tuesday 13 May 2025 19:55:45 +0000 (0:00:00.146) 0:00:37.796 *********** 2025-05-13 19:55:45.369216 | orchestrator | skipping: [testbed-node-4] 
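"Create block VGs" and "Create block LVs" are the only changed tasks in this play: for each entry in lvm_volumes, a ceph-<uuid> volume group is created on the matching physical device (/dev/sdb and /dev/sdc, per the VG -> PV dict built above), then a single osd-block-<uuid> logical volume is laid over it. A minimal sketch with the community.general LVM modules, assuming one whole-device PV per VG (the block_vg_pvs lookup is hypothetical; the real mapping comes from ceph_osd_devices):

- name: Create block VGs (sketch)
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ block_vg_pvs[item.data_vg] }}"  # e.g. /dev/sdb; illustrative lookup
  loop: "{{ lvm_volumes }}"

- name: Create block LVs (sketch)
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%FREE  # one OSD LV spanning the whole VG
  loop: "{{ lvm_volumes }}"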
2025-05-13 19:55:45.369424 | orchestrator | 2025-05-13 19:55:45.370429 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 19:55:45.371267 | orchestrator | Tuesday 13 May 2025 19:55:45 +0000 (0:00:00.131) 0:00:37.927 *********** 2025-05-13 19:55:45.529288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:45.529445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:45.531383 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:45.533676 | orchestrator | 2025-05-13 19:55:45.533891 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 19:55:45.534610 | orchestrator | Tuesday 13 May 2025 19:55:45 +0000 (0:00:00.157) 0:00:38.085 *********** 2025-05-13 19:55:45.853164 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:45.853679 | orchestrator | 2025-05-13 19:55:45.855421 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 19:55:45.857333 | orchestrator | Tuesday 13 May 2025 19:55:45 +0000 (0:00:00.326) 0:00:38.411 *********** 2025-05-13 19:55:46.016483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:46.019194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:46.022829 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.023656 | orchestrator | 2025-05-13 19:55:46.024235 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 19:55:46.024570 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.160) 0:00:38.572 *********** 2025-05-13 19:55:46.163306 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:46.164419 | orchestrator | 2025-05-13 19:55:46.165300 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 19:55:46.166519 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.148) 0:00:38.720 *********** 2025-05-13 19:55:46.308513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:46.310497 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:46.311169 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.312300 | orchestrator | 2025-05-13 19:55:46.313571 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 19:55:46.314824 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.146) 0:00:38.867 *********** 2025-05-13 19:55:46.455820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:46.456017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:46.458413 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.459504 | orchestrator | 2025-05-13 19:55:46.460714 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 19:55:46.461999 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.147) 0:00:39.014 *********** 2025-05-13 19:55:46.588722 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:46.589404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:46.590493 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.591824 | orchestrator | 2025-05-13 19:55:46.592633 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 19:55:46.593299 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.132) 0:00:39.147 *********** 2025-05-13 19:55:46.716012 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.717568 | orchestrator | 2025-05-13 19:55:46.718084 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 19:55:46.718953 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.127) 0:00:39.274 *********** 2025-05-13 19:55:46.843273 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.846356 | orchestrator | 2025-05-13 19:55:46.848871 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 19:55:46.849101 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.127) 0:00:39.401 *********** 2025-05-13 19:55:46.973080 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:46.973829 | orchestrator | 2025-05-13 19:55:46.974971 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 19:55:46.975937 | orchestrator | Tuesday 13 May 2025 19:55:46 +0000 (0:00:00.129) 0:00:39.531 *********** 2025-05-13 19:55:47.139813 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 19:55:47.140086 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 19:55:47.141273 | orchestrator | } 2025-05-13 19:55:47.142787 | orchestrator | 2025-05-13 19:55:47.144070 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 19:55:47.145305 | orchestrator | Tuesday 13 May 2025 19:55:47 +0000 (0:00:00.166) 0:00:39.697 *********** 2025-05-13 19:55:47.306600 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 19:55:47.309824 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 19:55:47.310687 | orchestrator | } 2025-05-13 19:55:47.311309 | orchestrator | 2025-05-13 19:55:47.312522 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 19:55:47.313163 | orchestrator | Tuesday 13 May 2025 19:55:47 +0000 (0:00:00.167) 0:00:39.865 *********** 2025-05-13 19:55:47.443718 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 19:55:47.443858 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 19:55:47.444784 | orchestrator | } 2025-05-13 19:55:47.445867 | orchestrator | 2025-05-13 19:55:47.446843 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-05-13 19:55:47.447797 | orchestrator | Tuesday 13 May 2025 19:55:47 +0000 (0:00:00.135) 0:00:40.000 *********** 2025-05-13 19:55:48.178569 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:48.179657 | orchestrator | 2025-05-13 19:55:48.181398 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 19:55:48.182768 | orchestrator | Tuesday 13 May 2025 19:55:48 +0000 (0:00:00.734) 0:00:40.735 *********** 2025-05-13 19:55:48.702600 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:48.702723 | orchestrator | 2025-05-13 19:55:48.704977 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 19:55:48.706390 | orchestrator | Tuesday 13 May 2025 19:55:48 +0000 (0:00:00.517) 0:00:41.252 *********** 2025-05-13 19:55:49.214344 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:49.215372 | orchestrator | 2025-05-13 19:55:49.216752 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 19:55:49.217825 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.520) 0:00:41.772 *********** 2025-05-13 19:55:49.379993 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:49.380454 | orchestrator | 2025-05-13 19:55:49.381680 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 19:55:49.383361 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.165) 0:00:41.938 *********** 2025-05-13 19:55:49.487661 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:49.488898 | orchestrator | 2025-05-13 19:55:49.490141 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 19:55:49.492405 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.107) 0:00:42.045 *********** 2025-05-13 19:55:49.615520 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:49.615602 | orchestrator | 2025-05-13 19:55:49.617067 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 19:55:49.618566 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.126) 0:00:42.172 *********** 2025-05-13 19:55:49.760418 | orchestrator | ok: [testbed-node-4] => { 2025-05-13 19:55:49.760533 | orchestrator |  "vgs_report": { 2025-05-13 19:55:49.760649 | orchestrator |  "vg": [] 2025-05-13 19:55:49.761609 | orchestrator |  } 2025-05-13 19:55:49.762618 | orchestrator | } 2025-05-13 19:55:49.763863 | orchestrator | 2025-05-13 19:55:49.764448 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-13 19:55:49.765286 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.145) 0:00:42.318 *********** 2025-05-13 19:55:49.942573 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:49.943623 | orchestrator | 2025-05-13 19:55:49.943972 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 19:55:49.945692 | orchestrator | Tuesday 13 May 2025 19:55:49 +0000 (0:00:00.182) 0:00:42.500 *********** 2025-05-13 19:55:50.092299 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.092928 | orchestrator | 2025-05-13 19:55:50.093298 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 19:55:50.095602 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.149) 
0:00:42.650 *********** 2025-05-13 19:55:50.226746 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.227518 | orchestrator | 2025-05-13 19:55:50.229625 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 19:55:50.230661 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.135) 0:00:42.785 *********** 2025-05-13 19:55:50.364721 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.365679 | orchestrator | 2025-05-13 19:55:50.367329 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 19:55:50.368977 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.135) 0:00:42.921 *********** 2025-05-13 19:55:50.507685 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.508845 | orchestrator | 2025-05-13 19:55:50.510145 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 19:55:50.511769 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.145) 0:00:43.066 *********** 2025-05-13 19:55:50.848552 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.852597 | orchestrator | 2025-05-13 19:55:50.854205 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 19:55:50.855432 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.340) 0:00:43.407 *********** 2025-05-13 19:55:50.994586 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:50.995177 | orchestrator | 2025-05-13 19:55:50.996473 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 19:55:50.997753 | orchestrator | Tuesday 13 May 2025 19:55:50 +0000 (0:00:00.144) 0:00:43.551 *********** 2025-05-13 19:55:51.144825 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.144895 | orchestrator | 2025-05-13 19:55:51.145714 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 19:55:51.146881 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.150) 0:00:43.702 *********** 2025-05-13 19:55:51.296325 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.297114 | orchestrator | 2025-05-13 19:55:51.298422 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 19:55:51.298689 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.151) 0:00:43.853 *********** 2025-05-13 19:55:51.441557 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.442411 | orchestrator | 2025-05-13 19:55:51.445964 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 19:55:51.446829 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.146) 0:00:44.000 *********** 2025-05-13 19:55:51.570399 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.572071 | orchestrator | 2025-05-13 19:55:51.574156 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 19:55:51.574888 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.127) 0:00:44.127 *********** 2025-05-13 19:55:51.697996 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.698613 | orchestrator | 2025-05-13 19:55:51.699475 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 19:55:51.700388 | orchestrator | Tuesday 13 May 2025 19:55:51 
+0000 (0:00:00.128) 0:00:44.256 *********** 2025-05-13 19:55:51.819332 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.819940 | orchestrator | 2025-05-13 19:55:51.821345 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 19:55:51.822618 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.121) 0:00:44.378 *********** 2025-05-13 19:55:51.966087 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:51.966275 | orchestrator | 2025-05-13 19:55:51.967143 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 19:55:51.967994 | orchestrator | Tuesday 13 May 2025 19:55:51 +0000 (0:00:00.146) 0:00:44.524 *********** 2025-05-13 19:55:52.113135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:52.113322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:52.114128 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:52.114781 | orchestrator | 2025-05-13 19:55:52.115493 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 19:55:52.116670 | orchestrator | Tuesday 13 May 2025 19:55:52 +0000 (0:00:00.146) 0:00:44.670 *********** 2025-05-13 19:55:52.280475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:52.281809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:52.282541 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:52.284287 | orchestrator | 2025-05-13 19:55:52.285235 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 19:55:52.286173 | orchestrator | Tuesday 13 May 2025 19:55:52 +0000 (0:00:00.167) 0:00:44.838 *********** 2025-05-13 19:55:52.451753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:52.451933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:52.453718 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:52.453976 | orchestrator | 2025-05-13 19:55:52.453998 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 19:55:52.454100 | orchestrator | Tuesday 13 May 2025 19:55:52 +0000 (0:00:00.169) 0:00:45.008 *********** 2025-05-13 19:55:52.817631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:52.818159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:52.819729 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:52.820449 | orchestrator | 2025-05-13 19:55:52.822104 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 19:55:52.822137 | orchestrator | Tuesday 13 May 2025 19:55:52 +0000 (0:00:00.366) 0:00:45.375 *********** 2025-05-13 19:55:52.990341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:52.990514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:52.993599 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:52.994736 | orchestrator | 2025-05-13 19:55:52.996688 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-13 19:55:52.997732 | orchestrator | Tuesday 13 May 2025 19:55:52 +0000 (0:00:00.172) 0:00:45.548 *********** 2025-05-13 19:55:53.162172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:53.163172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:53.165351 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:53.166388 | orchestrator | 2025-05-13 19:55:53.167162 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 19:55:53.168083 | orchestrator | Tuesday 13 May 2025 19:55:53 +0000 (0:00:00.170) 0:00:45.718 *********** 2025-05-13 19:55:53.302346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:53.302551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:53.303778 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:53.305069 | orchestrator | 2025-05-13 19:55:53.305316 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 19:55:53.306897 | orchestrator | Tuesday 13 May 2025 19:55:53 +0000 (0:00:00.140) 0:00:45.859 *********** 2025-05-13 19:55:53.463756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:53.464563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:53.467880 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:53.467949 | orchestrator | 2025-05-13 19:55:53.468061 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 19:55:53.469186 | orchestrator | Tuesday 13 May 2025 19:55:53 +0000 (0:00:00.162) 0:00:46.021 *********** 2025-05-13 19:55:53.985175 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:53.985360 | orchestrator | 2025-05-13 19:55:53.985380 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 19:55:53.987971 | orchestrator | Tuesday 13 May 2025 19:55:53 +0000 (0:00:00.517) 
0:00:46.539 *********** 2025-05-13 19:55:54.488414 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:54.488546 | orchestrator | 2025-05-13 19:55:54.488685 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 19:55:54.488752 | orchestrator | Tuesday 13 May 2025 19:55:54 +0000 (0:00:00.507) 0:00:47.047 *********** 2025-05-13 19:55:54.647483 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:55:54.647629 | orchestrator | 2025-05-13 19:55:54.648104 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 19:55:54.648607 | orchestrator | Tuesday 13 May 2025 19:55:54 +0000 (0:00:00.157) 0:00:47.205 *********** 2025-05-13 19:55:54.842538 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'vg_name': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'}) 2025-05-13 19:55:54.842654 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'vg_name': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'}) 2025-05-13 19:55:54.842669 | orchestrator | 2025-05-13 19:55:54.843180 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 19:55:54.843208 | orchestrator | Tuesday 13 May 2025 19:55:54 +0000 (0:00:00.193) 0:00:47.399 *********** 2025-05-13 19:55:55.015545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:55.015753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:55.016440 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:55.016952 | orchestrator | 2025-05-13 19:55:55.017304 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 19:55:55.017774 | orchestrator | Tuesday 13 May 2025 19:55:55 +0000 (0:00:00.175) 0:00:47.575 *********** 2025-05-13 19:55:55.162971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:55.163818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:55.164319 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:55.165326 | orchestrator | 2025-05-13 19:55:55.166846 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-13 19:55:55.166871 | orchestrator | Tuesday 13 May 2025 19:55:55 +0000 (0:00:00.146) 0:00:47.721 *********** 2025-05-13 19:55:55.329980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'})  2025-05-13 19:55:55.331162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'})  2025-05-13 19:55:55.332054 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:55:55.333502 | orchestrator | 2025-05-13 19:55:55.335471 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-13 
19:55:55.337358 | orchestrator | Tuesday 13 May 2025 19:55:55 +0000 (0:00:00.166) 0:00:47.888 ***********
2025-05-13 19:55:55.814834 | orchestrator | ok: [testbed-node-4] => {
2025-05-13 19:55:55.816394 | orchestrator |  "lvm_report": {
2025-05-13 19:55:55.817435 | orchestrator |  "lv": [
2025-05-13 19:55:55.818445 | orchestrator |  {
2025-05-13 19:55:55.820023 | orchestrator |  "lv_name": "osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc",
2025-05-13 19:55:55.821247 | orchestrator |  "vg_name": "ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc"
2025-05-13 19:55:55.822228 | orchestrator |  },
2025-05-13 19:55:55.822975 | orchestrator |  {
2025-05-13 19:55:55.823817 | orchestrator |  "lv_name": "osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6",
2025-05-13 19:55:55.824721 | orchestrator |  "vg_name": "ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6"
2025-05-13 19:55:55.826536 | orchestrator |  }
2025-05-13 19:55:55.826836 | orchestrator |  ],
2025-05-13 19:55:55.827622 | orchestrator |  "pv": [
2025-05-13 19:55:55.827955 | orchestrator |  {
2025-05-13 19:55:55.828593 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-13 19:55:55.828929 | orchestrator |  "vg_name": "ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6"
2025-05-13 19:55:55.829433 | orchestrator |  },
2025-05-13 19:55:55.829767 | orchestrator |  {
2025-05-13 19:55:55.830249 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-13 19:55:55.830696 | orchestrator |  "vg_name": "ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc"
2025-05-13 19:55:55.831129 | orchestrator |  }
2025-05-13 19:55:55.831550 | orchestrator |  ]
2025-05-13 19:55:55.832099 | orchestrator |  }
2025-05-13 19:55:55.832392 | orchestrator | }
2025-05-13 19:55:55.832926 | orchestrator |
2025-05-13 19:55:55.833396 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-13 19:55:55.833683 | orchestrator |
2025-05-13 19:55:55.834174 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-13 19:55:55.834422 | orchestrator | Tuesday 13 May 2025 19:55:55 +0000 (0:00:00.484) 0:00:48.372 ***********
2025-05-13 19:55:56.075442 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-13 19:55:56.075653 | orchestrator |
2025-05-13 19:55:56.075677 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-13 19:55:56.076426 | orchestrator | Tuesday 13 May 2025 19:55:56 +0000 (0:00:00.260) 0:00:48.633 ***********
2025-05-13 19:55:56.304908 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:55:56.305084 | orchestrator |
2025-05-13 19:55:56.305105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-13 19:55:56.305119 | orchestrator | Tuesday 13 May 2025 19:55:56 +0000 (0:00:00.228) 0:00:48.861 ***********
2025-05-13 19:55:56.703665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-13 19:55:56.704908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-13 19:55:56.706369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-13 19:55:56.707950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-13 19:55:56.709404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-13 19:55:56.710734 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-13 19:55:56.711605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-13 19:55:56.712331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-13 19:55:56.713225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-13 19:55:56.713958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-13 19:55:56.714813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-13 19:55:56.715380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-13 19:55:56.716070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-13 19:55:56.716697 | orchestrator | 2025-05-13 19:55:56.717602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:56.718213 | orchestrator | Tuesday 13 May 2025 19:55:56 +0000 (0:00:00.398) 0:00:49.260 *********** 2025-05-13 19:55:56.910176 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:56.910280 | orchestrator | 2025-05-13 19:55:56.910897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:56.911428 | orchestrator | Tuesday 13 May 2025 19:55:56 +0000 (0:00:00.207) 0:00:49.468 *********** 2025-05-13 19:55:57.115962 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:57.116250 | orchestrator | 2025-05-13 19:55:57.117337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:57.117976 | orchestrator | Tuesday 13 May 2025 19:55:57 +0000 (0:00:00.205) 0:00:49.674 *********** 2025-05-13 19:55:57.307958 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:57.308337 | orchestrator | 2025-05-13 19:55:57.308825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:57.309623 | orchestrator | Tuesday 13 May 2025 19:55:57 +0000 (0:00:00.192) 0:00:49.866 *********** 2025-05-13 19:55:57.497869 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:57.498894 | orchestrator | 2025-05-13 19:55:57.500046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:57.500620 | orchestrator | Tuesday 13 May 2025 19:55:57 +0000 (0:00:00.190) 0:00:50.056 *********** 2025-05-13 19:55:57.683758 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:57.684395 | orchestrator | 2025-05-13 19:55:57.685859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:57.686367 | orchestrator | Tuesday 13 May 2025 19:55:57 +0000 (0:00:00.185) 0:00:50.241 *********** 2025-05-13 19:55:58.255856 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:58.256113 | orchestrator | 2025-05-13 19:55:58.256978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:58.258140 | orchestrator | Tuesday 13 May 2025 19:55:58 +0000 (0:00:00.573) 0:00:50.814 *********** 2025-05-13 19:55:58.443321 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:58.443422 | orchestrator | 2025-05-13 19:55:58.444052 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-05-13 19:55:58.444884 | orchestrator | Tuesday 13 May 2025 19:55:58 +0000 (0:00:00.186) 0:00:51.001 *********** 2025-05-13 19:55:58.636330 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:55:58.637250 | orchestrator | 2025-05-13 19:55:58.638391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:58.638791 | orchestrator | Tuesday 13 May 2025 19:55:58 +0000 (0:00:00.193) 0:00:51.194 *********** 2025-05-13 19:55:59.080242 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af) 2025-05-13 19:55:59.081226 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af) 2025-05-13 19:55:59.082101 | orchestrator | 2025-05-13 19:55:59.082690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:59.083817 | orchestrator | Tuesday 13 May 2025 19:55:59 +0000 (0:00:00.442) 0:00:51.637 *********** 2025-05-13 19:55:59.507508 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711) 2025-05-13 19:55:59.508854 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711) 2025-05-13 19:55:59.509777 | orchestrator | 2025-05-13 19:55:59.510796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:59.512099 | orchestrator | Tuesday 13 May 2025 19:55:59 +0000 (0:00:00.428) 0:00:52.066 *********** 2025-05-13 19:55:59.929142 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d) 2025-05-13 19:55:59.930223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d) 2025-05-13 19:55:59.931905 | orchestrator | 2025-05-13 19:55:59.932279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:55:59.933196 | orchestrator | Tuesday 13 May 2025 19:55:59 +0000 (0:00:00.420) 0:00:52.486 *********** 2025-05-13 19:56:00.347453 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61) 2025-05-13 19:56:00.347563 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61) 2025-05-13 19:56:00.349153 | orchestrator | 2025-05-13 19:56:00.350362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-13 19:56:00.351297 | orchestrator | Tuesday 13 May 2025 19:56:00 +0000 (0:00:00.419) 0:00:52.906 *********** 2025-05-13 19:56:00.682458 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-13 19:56:00.684229 | orchestrator | 2025-05-13 19:56:00.684586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:00.684938 | orchestrator | Tuesday 13 May 2025 19:56:00 +0000 (0:00:00.328) 0:00:53.234 *********** 2025-05-13 19:56:01.083783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-13 19:56:01.084045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-13 19:56:01.085595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
2025-05-13 19:56:01.085668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-13 19:56:01.087470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-13 19:56:01.088143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-13 19:56:01.089217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-13 19:56:01.090250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-13 19:56:01.091495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-13 19:56:01.092154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-13 19:56:01.093321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-13 19:56:01.093947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-13 19:56:01.094894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-13 19:56:01.095698 | orchestrator | 2025-05-13 19:56:01.096531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:01.097222 | orchestrator | Tuesday 13 May 2025 19:56:01 +0000 (0:00:00.406) 0:00:53.640 *********** 2025-05-13 19:56:01.292775 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:01.292967 | orchestrator | 2025-05-13 19:56:01.293750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:01.294217 | orchestrator | Tuesday 13 May 2025 19:56:01 +0000 (0:00:00.210) 0:00:53.851 *********** 2025-05-13 19:56:01.482075 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:01.483118 | orchestrator | 2025-05-13 19:56:01.484034 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:01.485939 | orchestrator | Tuesday 13 May 2025 19:56:01 +0000 (0:00:00.189) 0:00:54.040 *********** 2025-05-13 19:56:02.120075 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:02.120485 | orchestrator | 2025-05-13 19:56:02.121195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:02.121504 | orchestrator | Tuesday 13 May 2025 19:56:02 +0000 (0:00:00.636) 0:00:54.677 *********** 2025-05-13 19:56:02.365321 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:02.365478 | orchestrator | 2025-05-13 19:56:02.366565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:02.367243 | orchestrator | Tuesday 13 May 2025 19:56:02 +0000 (0:00:00.246) 0:00:54.923 *********** 2025-05-13 19:56:02.576117 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:02.577135 | orchestrator | 2025-05-13 19:56:02.578216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:02.579734 | orchestrator | Tuesday 13 May 2025 19:56:02 +0000 (0:00:00.210) 0:00:55.134 *********** 2025-05-13 19:56:02.772194 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:02.772453 | orchestrator | 2025-05-13 19:56:02.772912 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-13 19:56:02.775298 | orchestrator | Tuesday 13 May 2025 19:56:02 +0000 (0:00:00.195) 0:00:55.330 *********** 2025-05-13 19:56:02.989474 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:02.990368 | orchestrator | 2025-05-13 19:56:02.991541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:02.992797 | orchestrator | Tuesday 13 May 2025 19:56:02 +0000 (0:00:00.217) 0:00:55.547 *********** 2025-05-13 19:56:03.199180 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:03.199674 | orchestrator | 2025-05-13 19:56:03.200636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:03.201303 | orchestrator | Tuesday 13 May 2025 19:56:03 +0000 (0:00:00.210) 0:00:55.758 *********** 2025-05-13 19:56:03.868382 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-13 19:56:03.869387 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-13 19:56:03.870534 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-13 19:56:03.871285 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-13 19:56:03.871934 | orchestrator | 2025-05-13 19:56:03.873180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:03.873285 | orchestrator | Tuesday 13 May 2025 19:56:03 +0000 (0:00:00.667) 0:00:56.425 *********** 2025-05-13 19:56:04.056901 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:04.057137 | orchestrator | 2025-05-13 19:56:04.058565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:04.059400 | orchestrator | Tuesday 13 May 2025 19:56:04 +0000 (0:00:00.189) 0:00:56.614 *********** 2025-05-13 19:56:04.267671 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:04.269053 | orchestrator | 2025-05-13 19:56:04.269913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:04.270835 | orchestrator | Tuesday 13 May 2025 19:56:04 +0000 (0:00:00.211) 0:00:56.825 *********** 2025-05-13 19:56:04.466554 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:04.466684 | orchestrator | 2025-05-13 19:56:04.467153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-13 19:56:04.467801 | orchestrator | Tuesday 13 May 2025 19:56:04 +0000 (0:00:00.200) 0:00:57.025 *********** 2025-05-13 19:56:04.652351 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:04.653016 | orchestrator | 2025-05-13 19:56:04.653737 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-13 19:56:04.654813 | orchestrator | Tuesday 13 May 2025 19:56:04 +0000 (0:00:00.185) 0:00:57.211 *********** 2025-05-13 19:56:04.973010 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:04.973378 | orchestrator | 2025-05-13 19:56:04.974157 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-13 19:56:04.975243 | orchestrator | Tuesday 13 May 2025 19:56:04 +0000 (0:00:00.317) 0:00:57.528 *********** 2025-05-13 19:56:05.160233 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9e27190a-cad1-5451-a880-ae60fcff608c'}}) 2025-05-13 19:56:05.160477 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '6f4317e9-8e5a-55d6-81df-460521249898'}}) 2025-05-13 19:56:05.161245 | orchestrator | 2025-05-13 19:56:05.161632 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-13 19:56:05.162067 | orchestrator | Tuesday 13 May 2025 19:56:05 +0000 (0:00:00.190) 0:00:57.719 *********** 2025-05-13 19:56:06.982841 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'}) 2025-05-13 19:56:06.983407 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'}) 2025-05-13 19:56:06.984216 | orchestrator | 2025-05-13 19:56:06.984934 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-13 19:56:06.985952 | orchestrator | Tuesday 13 May 2025 19:56:06 +0000 (0:00:01.818) 0:00:59.537 *********** 2025-05-13 19:56:07.147202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:07.147898 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:07.148800 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:07.150733 | orchestrator | 2025-05-13 19:56:07.150796 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-13 19:56:07.151098 | orchestrator | Tuesday 13 May 2025 19:56:07 +0000 (0:00:00.167) 0:00:59.705 *********** 2025-05-13 19:56:08.441391 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'}) 2025-05-13 19:56:08.442753 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'}) 2025-05-13 19:56:08.444748 | orchestrator | 2025-05-13 19:56:08.445116 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-13 19:56:08.446229 | orchestrator | Tuesday 13 May 2025 19:56:08 +0000 (0:00:01.293) 0:01:00.998 *********** 2025-05-13 19:56:08.583635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:08.583737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:08.583752 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:08.584092 | orchestrator | 2025-05-13 19:56:08.584726 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-13 19:56:08.585174 | orchestrator | Tuesday 13 May 2025 19:56:08 +0000 (0:00:00.142) 0:01:01.140 *********** 2025-05-13 19:56:08.726473 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:08.727705 | orchestrator | 2025-05-13 19:56:08.728658 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-13 19:56:08.729535 | orchestrator | Tuesday 13 May 2025 19:56:08 +0000 (0:00:00.144) 0:01:01.285 
*********** 2025-05-13 19:56:08.875219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:08.877208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:08.878062 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:08.878729 | orchestrator | 2025-05-13 19:56:08.879291 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-13 19:56:08.880138 | orchestrator | Tuesday 13 May 2025 19:56:08 +0000 (0:00:00.147) 0:01:01.432 *********** 2025-05-13 19:56:08.998448 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:08.998757 | orchestrator | 2025-05-13 19:56:09.000099 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-13 19:56:09.000213 | orchestrator | Tuesday 13 May 2025 19:56:08 +0000 (0:00:00.124) 0:01:01.557 *********** 2025-05-13 19:56:09.158309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:09.159461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:09.160575 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:09.161262 | orchestrator | 2025-05-13 19:56:09.162171 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-13 19:56:09.163044 | orchestrator | Tuesday 13 May 2025 19:56:09 +0000 (0:00:00.158) 0:01:01.716 *********** 2025-05-13 19:56:09.277595 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:09.278470 | orchestrator | 2025-05-13 19:56:09.279537 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-13 19:56:09.280251 | orchestrator | Tuesday 13 May 2025 19:56:09 +0000 (0:00:00.118) 0:01:01.834 *********** 2025-05-13 19:56:09.419875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:09.421066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:09.421765 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:09.422668 | orchestrator | 2025-05-13 19:56:09.423661 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-13 19:56:09.424269 | orchestrator | Tuesday 13 May 2025 19:56:09 +0000 (0:00:00.143) 0:01:01.978 *********** 2025-05-13 19:56:09.760886 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:09.761699 | orchestrator | 2025-05-13 19:56:09.762661 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-13 19:56:09.763382 | orchestrator | Tuesday 13 May 2025 19:56:09 +0000 (0:00:00.340) 0:01:02.319 *********** 2025-05-13 19:56:09.920181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:09.920668 
| orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:09.922216 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:09.922947 | orchestrator | 2025-05-13 19:56:09.923724 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-13 19:56:09.924368 | orchestrator | Tuesday 13 May 2025 19:56:09 +0000 (0:00:00.159) 0:01:02.478 *********** 2025-05-13 19:56:10.069149 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:10.069942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:10.070912 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:10.071689 | orchestrator | 2025-05-13 19:56:10.073023 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-13 19:56:10.073570 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.149) 0:01:02.627 *********** 2025-05-13 19:56:10.236240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:10.236754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:10.240281 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:10.242531 | orchestrator | 2025-05-13 19:56:10.243140 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-13 19:56:10.243792 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.167) 0:01:02.794 *********** 2025-05-13 19:56:10.374914 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:10.376106 | orchestrator | 2025-05-13 19:56:10.377165 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-13 19:56:10.379245 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.138) 0:01:02.933 *********** 2025-05-13 19:56:10.509639 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:10.509815 | orchestrator | 2025-05-13 19:56:10.511553 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-13 19:56:10.511577 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.133) 0:01:03.066 *********** 2025-05-13 19:56:10.650105 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:10.650213 | orchestrator | 2025-05-13 19:56:10.651146 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-13 19:56:10.651861 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.141) 0:01:03.207 *********** 2025-05-13 19:56:10.792440 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 19:56:10.795110 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-13 19:56:10.797446 | orchestrator | } 2025-05-13 19:56:10.797473 | orchestrator | 2025-05-13 19:56:10.799686 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-13 19:56:10.799713 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 
(0:00:00.139) 0:01:03.347 *********** 2025-05-13 19:56:10.930314 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 19:56:10.932520 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-13 19:56:10.934103 | orchestrator | } 2025-05-13 19:56:10.934455 | orchestrator | 2025-05-13 19:56:10.935377 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-13 19:56:10.936071 | orchestrator | Tuesday 13 May 2025 19:56:10 +0000 (0:00:00.141) 0:01:03.488 *********** 2025-05-13 19:56:11.068573 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 19:56:11.068737 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-13 19:56:11.069072 | orchestrator | } 2025-05-13 19:56:11.070109 | orchestrator | 2025-05-13 19:56:11.071232 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-13 19:56:11.073085 | orchestrator | Tuesday 13 May 2025 19:56:11 +0000 (0:00:00.139) 0:01:03.628 *********** 2025-05-13 19:56:11.574410 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:11.575790 | orchestrator | 2025-05-13 19:56:11.575819 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-13 19:56:11.576543 | orchestrator | Tuesday 13 May 2025 19:56:11 +0000 (0:00:00.504) 0:01:04.133 *********** 2025-05-13 19:56:12.082829 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:12.082984 | orchestrator | 2025-05-13 19:56:12.083883 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-13 19:56:12.085221 | orchestrator | Tuesday 13 May 2025 19:56:12 +0000 (0:00:00.506) 0:01:04.639 *********** 2025-05-13 19:56:12.790240 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:12.792460 | orchestrator | 2025-05-13 19:56:12.796088 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-13 19:56:12.796643 | orchestrator | Tuesday 13 May 2025 19:56:12 +0000 (0:00:00.708) 0:01:05.347 *********** 2025-05-13 19:56:12.936150 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:12.936698 | orchestrator | 2025-05-13 19:56:12.937242 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-13 19:56:12.938893 | orchestrator | Tuesday 13 May 2025 19:56:12 +0000 (0:00:00.147) 0:01:05.495 *********** 2025-05-13 19:56:13.069380 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.070408 | orchestrator | 2025-05-13 19:56:13.071279 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-13 19:56:13.072147 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.132) 0:01:05.628 *********** 2025-05-13 19:56:13.189754 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.190385 | orchestrator | 2025-05-13 19:56:13.191456 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-13 19:56:13.191977 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.118) 0:01:05.746 *********** 2025-05-13 19:56:13.327641 | orchestrator | ok: [testbed-node-5] => { 2025-05-13 19:56:13.328522 | orchestrator |  "vgs_report": { 2025-05-13 19:56:13.329814 | orchestrator |  "vg": [] 2025-05-13 19:56:13.330429 | orchestrator |  } 2025-05-13 19:56:13.331717 | orchestrator | } 2025-05-13 19:56:13.332134 | orchestrator | 2025-05-13 19:56:13.332919 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-05-13 19:56:13.334102 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.139) 0:01:05.886 *********** 2025-05-13 19:56:13.468748 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.471737 | orchestrator | 2025-05-13 19:56:13.473614 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-13 19:56:13.474841 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.138) 0:01:06.024 *********** 2025-05-13 19:56:13.611195 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.611386 | orchestrator | 2025-05-13 19:56:13.612704 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-13 19:56:13.614134 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.144) 0:01:06.169 *********** 2025-05-13 19:56:13.745038 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.746188 | orchestrator | 2025-05-13 19:56:13.747376 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-13 19:56:13.748844 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.134) 0:01:06.303 *********** 2025-05-13 19:56:13.868173 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:13.868861 | orchestrator | 2025-05-13 19:56:13.870279 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-13 19:56:13.871478 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.122) 0:01:06.426 *********** 2025-05-13 19:56:14.001707 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.002200 | orchestrator | 2025-05-13 19:56:14.002831 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-13 19:56:14.003829 | orchestrator | Tuesday 13 May 2025 19:56:13 +0000 (0:00:00.133) 0:01:06.560 *********** 2025-05-13 19:56:14.144039 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.144701 | orchestrator | 2025-05-13 19:56:14.145340 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-13 19:56:14.146163 | orchestrator | Tuesday 13 May 2025 19:56:14 +0000 (0:00:00.142) 0:01:06.703 *********** 2025-05-13 19:56:14.285482 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.287102 | orchestrator | 2025-05-13 19:56:14.288181 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-13 19:56:14.290162 | orchestrator | Tuesday 13 May 2025 19:56:14 +0000 (0:00:00.139) 0:01:06.842 *********** 2025-05-13 19:56:14.437888 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.438788 | orchestrator | 2025-05-13 19:56:14.440701 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-13 19:56:14.441228 | orchestrator | Tuesday 13 May 2025 19:56:14 +0000 (0:00:00.152) 0:01:06.995 *********** 2025-05-13 19:56:14.781838 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.782924 | orchestrator | 2025-05-13 19:56:14.783118 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-13 19:56:14.783722 | orchestrator | Tuesday 13 May 2025 19:56:14 +0000 (0:00:00.345) 0:01:07.340 *********** 2025-05-13 19:56:14.926349 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:14.927091 | orchestrator | 2025-05-13 19:56:14.927766 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-13 19:56:14.928771 | orchestrator | Tuesday 13 May 2025 19:56:14 +0000 (0:00:00.144) 0:01:07.485 *********** 2025-05-13 19:56:15.059599 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.060313 | orchestrator | 2025-05-13 19:56:15.062005 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-13 19:56:15.062587 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.132) 0:01:07.617 *********** 2025-05-13 19:56:15.211838 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.213223 | orchestrator | 2025-05-13 19:56:15.214460 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-13 19:56:15.215200 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.152) 0:01:07.769 *********** 2025-05-13 19:56:15.340646 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.341626 | orchestrator | 2025-05-13 19:56:15.342867 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-13 19:56:15.343236 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.129) 0:01:07.899 *********** 2025-05-13 19:56:15.477654 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.479581 | orchestrator | 2025-05-13 19:56:15.480222 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-13 19:56:15.481178 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.136) 0:01:08.036 *********** 2025-05-13 19:56:15.635447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:15.636771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:15.637351 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.638123 | orchestrator | 2025-05-13 19:56:15.638783 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-13 19:56:15.639532 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.157) 0:01:08.193 *********** 2025-05-13 19:56:15.778386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:15.778488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:15.778586 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.780222 | orchestrator | 2025-05-13 19:56:15.781861 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-13 19:56:15.781939 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.141) 0:01:08.335 *********** 2025-05-13 19:56:15.930094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:15.930200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:15.931036 | 
orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:15.931642 | orchestrator | 2025-05-13 19:56:15.932518 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-13 19:56:15.934003 | orchestrator | Tuesday 13 May 2025 19:56:15 +0000 (0:00:00.152) 0:01:08.488 *********** 2025-05-13 19:56:16.085403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:16.085980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:16.086759 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:16.087136 | orchestrator | 2025-05-13 19:56:16.087451 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-13 19:56:16.089553 | orchestrator | Tuesday 13 May 2025 19:56:16 +0000 (0:00:00.155) 0:01:08.644 *********** 2025-05-13 19:56:16.233723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:16.235159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:16.235903 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:16.237172 | orchestrator | 2025-05-13 19:56:16.238175 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-13 19:56:16.238686 | orchestrator | Tuesday 13 May 2025 19:56:16 +0000 (0:00:00.147) 0:01:08.792 *********** 2025-05-13 19:56:16.368915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:16.369727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:16.371208 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:16.372378 | orchestrator | 2025-05-13 19:56:16.373403 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-13 19:56:16.374434 | orchestrator | Tuesday 13 May 2025 19:56:16 +0000 (0:00:00.134) 0:01:08.927 *********** 2025-05-13 19:56:16.728206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:16.730335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:16.730471 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:16.730839 | orchestrator | 2025-05-13 19:56:16.731331 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-13 19:56:16.732127 | orchestrator | Tuesday 13 May 2025 19:56:16 +0000 (0:00:00.358) 0:01:09.285 *********** 2025-05-13 19:56:16.879038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 
19:56:16.880021 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:16.882511 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:16.883096 | orchestrator | 2025-05-13 19:56:16.884201 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-13 19:56:16.885247 | orchestrator | Tuesday 13 May 2025 19:56:16 +0000 (0:00:00.152) 0:01:09.437 *********** 2025-05-13 19:56:17.420300 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:17.420394 | orchestrator | 2025-05-13 19:56:17.420405 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-13 19:56:17.420459 | orchestrator | Tuesday 13 May 2025 19:56:17 +0000 (0:00:00.536) 0:01:09.973 *********** 2025-05-13 19:56:17.909914 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:17.910241 | orchestrator | 2025-05-13 19:56:17.911277 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-13 19:56:17.912116 | orchestrator | Tuesday 13 May 2025 19:56:17 +0000 (0:00:00.494) 0:01:10.468 *********** 2025-05-13 19:56:18.056470 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:56:18.056658 | orchestrator | 2025-05-13 19:56:18.058208 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-13 19:56:18.058366 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.147) 0:01:10.615 *********** 2025-05-13 19:56:18.239747 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'vg_name': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'}) 2025-05-13 19:56:18.240266 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'vg_name': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'}) 2025-05-13 19:56:18.241627 | orchestrator | 2025-05-13 19:56:18.243524 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-13 19:56:18.243721 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.182) 0:01:10.797 *********** 2025-05-13 19:56:18.378092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:18.379271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:18.380573 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:18.381238 | orchestrator | 2025-05-13 19:56:18.382105 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-13 19:56:18.382808 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.138) 0:01:10.935 *********** 2025-05-13 19:56:18.527275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})  2025-05-13 19:56:18.528188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})  2025-05-13 19:56:18.529424 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:18.530160 | orchestrator | 2025-05-13 19:56:18.530832 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-13 19:56:18.532131 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.149) 0:01:11.085 ***********
2025-05-13 19:56:18.680148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'})
2025-05-13 19:56:18.680361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'})
2025-05-13 19:56:18.680780 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:56:18.681355 | orchestrator |
2025-05-13 19:56:18.681714 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-13 19:56:18.682350 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.154) 0:01:11.239 ***********
2025-05-13 19:56:18.833553 | orchestrator | ok: [testbed-node-5] => {
2025-05-13 19:56:18.836403 | orchestrator |  "lvm_report": {
2025-05-13 19:56:18.836878 | orchestrator |  "lv": [
2025-05-13 19:56:18.837455 | orchestrator |  {
2025-05-13 19:56:18.838176 | orchestrator |  "lv_name": "osd-block-6f4317e9-8e5a-55d6-81df-460521249898",
2025-05-13 19:56:18.838483 | orchestrator |  "vg_name": "ceph-6f4317e9-8e5a-55d6-81df-460521249898"
2025-05-13 19:56:18.839733 | orchestrator |  },
2025-05-13 19:56:18.840391 | orchestrator |  {
2025-05-13 19:56:18.841027 | orchestrator |  "lv_name": "osd-block-9e27190a-cad1-5451-a880-ae60fcff608c",
2025-05-13 19:56:18.841805 | orchestrator |  "vg_name": "ceph-9e27190a-cad1-5451-a880-ae60fcff608c"
2025-05-13 19:56:18.842439 | orchestrator |  }
2025-05-13 19:56:18.843820 | orchestrator |  ],
2025-05-13 19:56:18.844614 | orchestrator |  "pv": [
2025-05-13 19:56:18.847327 | orchestrator |  {
2025-05-13 19:56:18.847481 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-13 19:56:18.847679 | orchestrator |  "vg_name": "ceph-9e27190a-cad1-5451-a880-ae60fcff608c"
2025-05-13 19:56:18.848044 | orchestrator |  },
2025-05-13 19:56:18.848345 | orchestrator |  {
2025-05-13 19:56:18.848766 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-13 19:56:18.849087 | orchestrator |  "vg_name": "ceph-6f4317e9-8e5a-55d6-81df-460521249898"
2025-05-13 19:56:18.849413 | orchestrator |  }
2025-05-13 19:56:18.849863 | orchestrator |  ]
2025-05-13 19:56:18.850303 | orchestrator |  }
2025-05-13 19:56:18.850595 | orchestrator | }
2025-05-13 19:56:18.850970 | orchestrator |
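The lvm_report just printed is assembled by querying LVM twice and merging the JSON. A minimal sketch of how tasks like "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" can be written; the task and register names are taken from the log, while the exact commands, the vg_name filter, and the module choice are assumptions:

    # Sketch only: with --reportformat json, lvs/pvs emit {"report": [{"lv": [...]}]},
    # which is why the combine step indexes into report.0.
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --select 'vg_name =~ ^ceph-' -o lv_name,vg_name --reportformat json
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --select 'vg_name =~ ^ceph-' -o pv_name,vg_name --reportformat json
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

The report above shows exactly this shape: two ceph-<uuid> VGs, each holding one osd-block-<uuid> LV and backed by a single PV (/dev/sdb and /dev/sdc).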
2025-05-13 19:56:18.851205 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:56:18.851494 | orchestrator | 2025-05-13 19:56:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:56:18.851633 | orchestrator | 2025-05-13 19:56:18 | INFO  | Please wait and do not abort execution.
2025-05-13 19:56:18.852204 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 19:56:18.852574 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 19:56:18.854927 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-13 19:56:18.855081 | orchestrator |
2025-05-13 19:56:18.855110 | orchestrator |
2025-05-13 19:56:18.855254 | orchestrator |
2025-05-13 19:56:18.855780 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:56:18.856109 | orchestrator | Tuesday 13 May 2025 19:56:18 +0000 (0:00:00.152) 0:01:11.392 ***********
2025-05-13 19:56:18.856533 | orchestrator | ===============================================================================
2025-05-13 19:56:18.857383 | orchestrator | Create block VGs -------------------------------------------------------- 5.64s
2025-05-13 19:56:18.858008 | orchestrator | Create block LVs -------------------------------------------------------- 3.98s
2025-05-13 19:56:18.858558 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s
2025-05-13 19:56:18.859000 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.72s
2025-05-13 19:56:18.859214 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-05-13 19:56:18.859816 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2025-05-13 19:56:18.860240 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-05-13 19:56:18.860765 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s
2025-05-13 19:56:18.861201 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s
2025-05-13 19:56:18.861760 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-05-13 19:56:18.862212 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-05-13 19:56:18.863007 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-05-13 19:56:18.863149 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2025-05-13 19:56:18.863705 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2025-05-13 19:56:18.864199 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-13 19:56:18.864673 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.71s
2025-05-13 19:56:18.865099 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.68s
2025-05-13 19:56:18.865477 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.67s
2025-05-13 19:56:18.866128 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-05-13 19:56:18.866479 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.65s
2025-05-13 19:56:21.246446 | orchestrator | 2025-05-13 19:56:21 | INFO  | Task 7e6e93fa-9af3-44f4-a4b1-52e67fd2b112 (facts) was prepared for execution.
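In the TASKS RECAP above, the only steps that reported changed were "Create block VGs" (5.64s) and "Create block LVs" (3.98s): one ceph-<uuid> VG per OSD disk and a single osd-block-<uuid> LV inside it, driven by the ceph_osd_devices dict ({'key': 'sdb', 'value': {'osd_lvm_uuid': ...}}) and the lvm_volumes entries visible in the loop items. A minimal sketch of that pattern, assuming the community.general.lvg/lvol modules and a hypothetical _vg_to_pv lookup; the playbook's actual implementation may differ:

    # Sketch only: VG/LV names come from the log; _vg_to_pv is a hypothetical
    # mapping such as {'ceph-9e27190a-...': '/dev/sdb', 'ceph-6f4317e9-...': '/dev/sdc'}.
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _vg_to_pv[item.data_vg] }}"
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG  # assumption: a collocated OSD takes the whole VG
      loop: "{{ lvm_volumes }}"

Everything DB/WAL-related skipped and vgs_report came back empty, which is consistent with no ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices being configured for these nodes; the sizing guards (e.g. "Fail if DB LV size < 30 GiB") therefore had nothing to check.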
2025-05-13 19:56:21.246564 | orchestrator | 2025-05-13 19:56:21 | INFO  | It takes a moment until task 7e6e93fa-9af3-44f4-a4b1-52e67fd2b112 (facts) has been started and output is visible here.
2025-05-13 19:56:27.173557 | orchestrator |
2025-05-13 19:56:27.173670 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-13 19:56:27.175174 | orchestrator |
2025-05-13 19:56:27.175647 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-13 19:56:27.176743 | orchestrator | Tuesday 13 May 2025 19:56:27 +0000 (0:00:01.740) 0:00:01.740 ***********
2025-05-13 19:56:27.830443 | orchestrator | ok: [testbed-manager]
2025-05-13 19:56:29.792826 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:56:29.793330 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:56:29.794405 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:56:29.795176 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:56:29.797530 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:56:29.797558 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:56:29.797570 | orchestrator |
2025-05-13 19:56:29.799164 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-13 19:56:29.800042 | orchestrator | Tuesday 13 May 2025 19:56:29 +0000 (0:00:02.613) 0:00:04.354 ***********
2025-05-13 19:56:29.966138 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:56:30.053405 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:56:30.147197 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:56:30.239058 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:56:30.325239 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:56:32.082412 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:56:32.083123 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:56:32.085355 | orchestrator |
2025-05-13 19:56:32.086742 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-13 19:56:32.088228 | orchestrator |
2025-05-13 19:56:32.090494 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 19:56:32.091354 | orchestrator | Tuesday 13 May 2025 19:56:32 +0000 (0:00:02.299) 0:00:06.653 ***********
2025-05-13 19:56:38.730724 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:56:38.731335 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:56:38.733532 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:56:38.733565 | orchestrator | ok: [testbed-manager]
2025-05-13 19:56:38.734782 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:56:38.735852 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:56:38.736709 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:56:38.737519 | orchestrator |
2025-05-13 19:56:38.738123 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-13 19:56:38.738828 | orchestrator |
2025-05-13 19:56:38.739543 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-13 19:56:38.740217 | orchestrator | Tuesday 13 May 2025 19:56:38 +0000 (0:00:06.647) 0:00:13.301 ***********
2025-05-13 19:56:38.906206 | orchestrator | skipping: [testbed-manager]
2025-05-13 19:56:39.002689 | orchestrator | skipping: [testbed-node-0]
2025-05-13 19:56:39.109968 | orchestrator | skipping: [testbed-node-1]
2025-05-13 19:56:39.223620 | orchestrator | skipping: [testbed-node-2]
2025-05-13 19:56:39.304563 | orchestrator | skipping: [testbed-node-3]
2025-05-13 19:56:41.093130 | orchestrator | skipping: [testbed-node-4]
2025-05-13 19:56:41.093257 | orchestrator | skipping: [testbed-node-5]
2025-05-13 19:56:41.093317 | orchestrator |
2025-05-13 19:56:41.093592 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:56:41.095788 | orchestrator | 2025-05-13 19:56:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 19:56:41.095835 | orchestrator | 2025-05-13 19:56:41 | INFO  | Please wait and do not abort execution.
2025-05-13 19:56:41.095924 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.096217 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.097626 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.097921 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.098256 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.099088 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.100750 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 19:56:41.100772 | orchestrator |
2025-05-13 19:56:41.101395 | orchestrator |
2025-05-13 19:56:41.101969 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:56:41.103026 | orchestrator | Tuesday 13 May 2025 19:56:41 +0000 (0:00:02.362) 0:00:15.663 ***********
2025-05-13 19:56:41.103317 | orchestrator | ===============================================================================
2025-05-13 19:56:41.103667 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.65s
2025-05-13 19:56:41.104085 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.61s
2025-05-13 19:56:41.104707 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.36s
2025-05-13 19:56:41.104801 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.30s
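The facts run recapped above is lightweight: osism.commons.facts ensures a directory for custom facts exists and would copy testbed-specific fact files into it ("Copy fact files" skipped on every host here), then a plain fact-gathering pass refreshes the cache for all hosts; the final play only matters when Ansible is invoked with --limit and was skipped. A minimal sketch of the first two steps, assuming the conventional /etc/ansible/facts.d location (the path and module choices are assumptions; the task names are from the log):

    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d  # assumed; local *.fact files placed here appear under ansible_local
        state: directory
        mode: "0755"

    - name: Gathers facts about hosts
      ansible.builtin.setup: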
19:56:39.304563 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:56:41.093130 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:56:41.093257 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:56:41.093317 | orchestrator | 2025-05-13 19:56:41.093592 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:56:41.095788 | orchestrator | 2025-05-13 19:56:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 19:56:41.095835 | orchestrator | 2025-05-13 19:56:41 | INFO  | Please wait and do not abort execution. 2025-05-13 19:56:41.095924 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.096217 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.097626 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.097921 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.098256 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.099088 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.100750 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 19:56:41.100772 | orchestrator | 2025-05-13 19:56:41.101395 | orchestrator | 2025-05-13 19:56:41.101969 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:56:41.103026 | orchestrator | Tuesday 13 May 2025 19:56:41 +0000 (0:00:02.362) 0:00:15.663 *********** 2025-05-13 19:56:41.103317 | orchestrator | =============================================================================== 2025-05-13 19:56:41.103667 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.65s 2025-05-13 19:56:41.104085 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.61s 2025-05-13 19:56:41.104707 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.36s 2025-05-13 19:56:41.104801 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.30s 2025-05-13 19:56:41.804557 | orchestrator | 2025-05-13 19:56:41.806072 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue May 13 19:56:41 UTC 2025 2025-05-13 19:56:41.806105 | orchestrator | 2025-05-13 19:56:43.478009 | orchestrator | 2025-05-13 19:56:43 | INFO  | Collection nutshell is prepared for execution 2025-05-13 19:56:43.478201 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [0] - dotfiles 2025-05-13 19:56:43.483708 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [0] - homer 2025-05-13 19:56:43.483754 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [0] - netdata 2025-05-13 19:56:43.483766 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [0] - openstackclient 2025-05-13 19:56:43.483778 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [0] - phpmyadmin 2025-05-13 19:56:43.483789 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [0] - common 2025-05-13 19:56:43.485684 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [1] -- loadbalancer 2025-05-13 19:56:43.485709 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [2] 
--- opensearch 2025-05-13 19:56:43.485721 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [2] --- mariadb-ng 2025-05-13 19:56:43.485732 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [3] ---- horizon 2025-05-13 19:56:43.485743 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [3] ---- keystone 2025-05-13 19:56:43.485763 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [4] ----- neutron 2025-05-13 19:56:43.485854 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ wait-for-nova 2025-05-13 19:56:43.485902 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [5] ------ octavia 2025-05-13 19:56:43.485998 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- barbican 2025-05-13 19:56:43.486013 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- designate 2025-05-13 19:56:43.486201 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- ironic 2025-05-13 19:56:43.486219 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- placement 2025-05-13 19:56:43.486275 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- magnum 2025-05-13 19:56:43.486712 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [1] -- openvswitch 2025-05-13 19:56:43.486733 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [2] --- ovn 2025-05-13 19:56:43.486848 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [1] -- memcached 2025-05-13 19:56:43.486864 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [1] -- redis 2025-05-13 19:56:43.486955 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [1] -- rabbitmq-ng 2025-05-13 19:56:43.487177 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [0] - kubernetes 2025-05-13 19:56:43.488695 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [1] -- kubeconfig 2025-05-13 19:56:43.488732 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [1] -- copy-kubeconfig 2025-05-13 19:56:43.488812 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [0] - ceph 2025-05-13 19:56:43.490392 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [1] -- ceph-pools 2025-05-13 19:56:43.490417 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [2] --- copy-ceph-keys 2025-05-13 19:56:43.490429 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [3] ---- cephclient 2025-05-13 19:56:43.490852 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-13 19:56:43.490899 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [4] ----- wait-for-keystone 2025-05-13 19:56:43.490912 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-13 19:56:43.490923 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ glance 2025-05-13 19:56:43.490934 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ cinder 2025-05-13 19:56:43.490944 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ nova 2025-05-13 19:56:43.491159 | orchestrator | 2025-05-13 19:56:43 | INFO  | A [4] ----- prometheus 2025-05-13 19:56:43.491180 | orchestrator | 2025-05-13 19:56:43 | INFO  | D [5] ------ grafana 2025-05-13 19:56:43.680565 | orchestrator | 2025-05-13 19:56:43 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-13 19:56:43.680680 | orchestrator | 2025-05-13 19:56:43 | INFO  | Tasks are running in the background 2025-05-13 19:56:46.466904 | orchestrator | 2025-05-13 19:56:46 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-13 19:56:48.588766 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:56:48.589001 | orchestrator | 2025-05-13 19:56:48 | 
INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED
2025-05-13 19:56:48.593775 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state STARTED
2025-05-13 19:56:48.594103 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:56:48.596614 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:56:48.596962 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED
2025-05-13 19:56:48.597429 | orchestrator | 2025-05-13 19:56:48 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:56:48.597452 | orchestrator | 2025-05-13 19:56:48 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds repeat at ~3-second intervals from 19:56:51 into the 19:57:28 round; every check reports all seven tasks in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
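The polling output above is mechanical: the client asks the manager for the state of each background task, prints one INFO line per task, sleeps, and repeats until everything has left the STARTED state. A minimal sketch of that wait loop, assuming a hypothetical `get_task_state(task_id)` helper — the real osism client resolves states through its own task backend:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll all task IDs until none of them is still running.

    `get_task_state` is an assumed callable returning strings such as
    "STARTED" or "SUCCESS" for a given task ID.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while iterating is safe.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```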
2025-05-13 19:57:28.267415 | orchestrator | 2025-05-13 19:57:28 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:28.267468 | orchestrator | 2025-05-13 19:57:28 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:28.267480 | orchestrator | 2025-05-13 19:57:28 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:28.267491 | orchestrator | 2025-05-13 19:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:31.302372 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:31.306472 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:31.306569 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state STARTED 2025-05-13 19:57:31.306586 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:31.306662 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:31.308449 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:31.309006 | orchestrator | 2025-05-13 19:57:31 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:31.309374 | orchestrator | 2025-05-13 19:57:31 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:34.368851 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:34.372853 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:34.373006 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state STARTED 2025-05-13 19:57:34.375860 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:34.381963 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:34.385074 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:34.386379 | orchestrator | 2025-05-13 19:57:34 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:34.386410 | orchestrator | 2025-05-13 19:57:34 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:37.438599 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:37.438841 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:37.442289 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state STARTED 2025-05-13 19:57:37.442340 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:37.442360 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:37.442378 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:37.443034 | orchestrator | 2025-05-13 19:57:37 | INFO  | Task 
2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:37.443538 | orchestrator | 2025-05-13 19:57:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:40.491067 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:40.491217 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:40.492202 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state STARTED 2025-05-13 19:57:40.494183 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:40.497231 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:40.498785 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:40.500722 | orchestrator | 2025-05-13 19:57:40 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:40.500745 | orchestrator | 2025-05-13 19:57:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:43.544846 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:43.545612 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:43.547667 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 70d5b5a4-48ed-44e1-8faa-2d2c14457e2b is in state SUCCESS 2025-05-13 19:57:43.548033 | orchestrator | 2025-05-13 19:57:43.548076 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-13 19:57:43.548090 | orchestrator | 2025-05-13 19:57:43.548101 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-13 19:57:43.548112 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:13.569) 0:00:13.569 *********** 2025-05-13 19:57:43.548124 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:57:43.548143 | orchestrator | changed: [testbed-manager] 2025-05-13 19:57:43.548161 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:57:43.548180 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:57:43.548200 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:57:43.548217 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:57:43.548234 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:57:43.548246 | orchestrator | 2025-05-13 19:57:43.548257 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-13 19:57:43.548268 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:08.548) 0:00:22.117 ***********
2025-05-13 19:57:43.548279 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-13 19:57:43.548295 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-13 19:57:43.548306 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-13 19:57:43.548316 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-13 19:57:43.548327 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-13 19:57:43.548337 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-13 19:57:43.548347 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-13 19:57:43.548358 | orchestrator |
2025-05-13 19:57:43.548369 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-13 19:57:43.548379 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:03.824) 0:00:25.944 ***********
2025-05-13 19:57:43.548393 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-13 19:57:18.953810', 'end': '2025-05-13 19:57:18.962624', 'delta': '0:00:00.008814', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[... equivalent loop results for testbed-manager and testbed-node-1 through testbed-node-5 omitted; every host reports the same "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory" result (rc=2, failed_when_result=False), differing only in timestamps ...]
2025-05-13 19:57:43.548562 | orchestrator | 2025-05-13 19:57:43.548581
| orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-05-13 19:57:43.548600 | orchestrator | Tuesday 13 May 2025 19:57:27 +0000 (0:00:05.939) 0:00:31.884 *********** 2025-05-13 19:57:43.548628 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-13 19:57:43.548650 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-13 19:57:43.548667 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-13 19:57:43.548683 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-13 19:57:43.548700 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-13 19:57:43.548718 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-13 19:57:43.548769 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-13 19:57:43.548787 | orchestrator | 2025-05-13 19:57:43.548807 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-13 19:57:43.548827 | orchestrator | Tuesday 13 May 2025 19:57:31 +0000 (0:00:03.810) 0:00:35.695 *********** 2025-05-13 19:57:43.548845 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-13 19:57:43.548864 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-13 19:57:43.548884 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-13 19:57:43.548903 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-13 19:57:43.548922 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-13 19:57:43.548935 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-13 19:57:43.548947 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-13 19:57:43.548959 | orchestrator | 2025-05-13 19:57:43.548972 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:57:43.548995 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549008 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549019 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549030 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549041 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549059 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549070 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 19:57:43.549081 | orchestrator | 2025-05-13 19:57:43.549091 | orchestrator | 2025-05-13 19:57:43.549102 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 19:57:43.549113 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:08.335) 0:00:44.030 *********** 2025-05-13 19:57:43.549124 | orchestrator | =============================================================================== 2025-05-13 19:57:43.549134 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 8.55s 2025-05-13 19:57:43.549145 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 8.34s 2025-05-13 19:57:43.549155 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 5.94s 2025-05-13 19:57:43.549166 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.83s 2025-05-13 19:57:43.549177 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.81s 2025-05-13 19:57:43.550289 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:57:43.551427 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:43.554746 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:43.554774 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:43.558302 | orchestrator | 2025-05-13 19:57:43 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:43.558326 | orchestrator | 2025-05-13 19:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:46.612607 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:46.614339 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:46.616130 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:57:46.620844 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:46.620919 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:46.622245 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:46.629253 | orchestrator | 2025-05-13 19:57:46 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:46.629305 | orchestrator | 2025-05-13 19:57:46 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:49.709952 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:49.715115 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED 2025-05-13 19:57:49.722245 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:57:49.727601 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:57:49.727780 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:57:49.732147 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED 2025-05-13 19:57:49.733010 | orchestrator | 2025-05-13 19:57:49 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:57:49.733049 | orchestrator | 2025-05-13 19:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:57:52.802571 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:57:52.802648 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 
74776c19-d026-4690-a210-14b76c312a60 is in state STARTED
2025-05-13 19:57:52.803516 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:57:52.804529 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:57:52.816223 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:57:52.818785 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state STARTED
2025-05-13 19:57:52.824192 | orchestrator | 2025-05-13 19:57:52 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:57:52.824262 | orchestrator | 2025-05-13 19:57:52 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds from 19:57:55 through 19:58:08 omitted; each reports all seven tasks in state STARTED ...]
2025-05-13 19:58:11.355866 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED
2025-05-13 19:58:11.357931 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state STARTED
2025-05-13 19:58:11.364868 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:58:11.364955 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:58:11.364969 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:58:11.365829 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 2448a32e-0bcd-4cb9-92e5-c1ae982d0abe is in state SUCCESS
2025-05-13 19:58:11.366316 | orchestrator | 2025-05-13 19:58:11 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:58:11.366382 | orchestrator | 2025-05-13 19:58:11 | INFO  | Wait 1 second(s) until the next check
[... polling rounds at 19:58:14 and 19:58:17 omitted; the six remaining tasks stay in state STARTED ...]
2025-05-13 19:58:20.557052 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED
2025-05-13 19:58:20.557320 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task 74776c19-d026-4690-a210-14b76c312a60 is in state SUCCESS
2025-05-13 19:58:20.560921 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:58:20.561046 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:58:20.563212 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:58:20.564409 | orchestrator | 2025-05-13 19:58:20 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:58:20.566161 | orchestrator | 2025-05-13 19:58:20 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds from 19:58:23 through 19:58:48 omitted; the five remaining tasks stay in state STARTED ...]
2025-05-13 19:58:51.438273 | orchestrator | 2025-05-13 19:58:51 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED
2025-05-13 19:58:51.444398 | orchestrator | 2025-05-13 19:58:51 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:58:51.450538 | orchestrator | 2025-05-13 19:58:51 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:58:51.456722 | orchestrator | 2025-05-13 19:58:51 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:58:51.460963 | orchestrator | 2025-05-13
19:58:51 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:58:51.461171 | orchestrator | 2025-05-13 19:58:51 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:58:54.520083 | orchestrator | 2025-05-13 19:58:54 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:58:54.523835 | orchestrator | 2025-05-13 19:58:54 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:58:54.527488 | orchestrator | 2025-05-13 19:58:54 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:58:54.530468 | orchestrator | 2025-05-13 19:58:54 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:58:54.532316 | orchestrator | 2025-05-13 19:58:54 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:58:54.532353 | orchestrator | 2025-05-13 19:58:54 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:58:57.590301 | orchestrator | 2025-05-13 19:58:57 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:58:57.593672 | orchestrator | 2025-05-13 19:58:57 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:58:57.594162 | orchestrator | 2025-05-13 19:58:57 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:58:57.595223 | orchestrator | 2025-05-13 19:58:57 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:58:57.607176 | orchestrator | 2025-05-13 19:58:57 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:58:57.607314 | orchestrator | 2025-05-13 19:58:57 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:00.687614 | orchestrator | 2025-05-13 19:59:00 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state STARTED 2025-05-13 19:59:00.689820 | orchestrator | 2025-05-13 19:59:00 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED 2025-05-13 19:59:00.689847 | orchestrator | 2025-05-13 19:59:00 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:00.689852 | orchestrator | 2025-05-13 19:59:00 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:00.692080 | orchestrator | 2025-05-13 19:59:00 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED 2025-05-13 19:59:00.692107 | orchestrator | 2025-05-13 19:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:03.736536 | orchestrator | 2025-05-13 19:59:03 | INFO  | Task e7a1d52d-57d9-4074-bced-86d615957040 is in state SUCCESS 2025-05-13 19:59:03.737807 | orchestrator | 2025-05-13 19:59:03.737857 | orchestrator | 2025-05-13 19:59:03.737871 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-13 19:59:03.737882 | orchestrator | 2025-05-13 19:59:03.737893 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-13 19:59:03.737904 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:13.257) 0:00:13.257 *********** 2025-05-13 19:59:03.737915 | orchestrator | ok: [testbed-manager] => { 2025-05-13 19:59:03.737925 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
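The orchestrator polls the OSISM manager once per second until each deployment task leaves the STARTED state. For comparison, the same wait pattern can be expressed natively in Ansible with async tasks; this is a minimal sketch assuming a hypothetical long-running script, not the Celery-based mechanism the osism CLI actually uses:

- name: Start a long-running deployment task  # /opt/.../deploy-services.sh is hypothetical
  ansible.builtin.command: /opt/configuration/scripts/deploy-services.sh
  async: 3600
  poll: 0
  register: deploy_job

- name: Check once per second until the job finishes  # mirrors "Wait 1 second(s) until the next check"
  ansible.builtin.async_status:
    jid: "{{ deploy_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 3600
  delay: 1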
2025-05-13 19:59:03.737807 | orchestrator |
2025-05-13 19:59:03.737857 | orchestrator |
2025-05-13 19:59:03.737871 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-13 19:59:03.737882 | orchestrator |
2025-05-13 19:59:03.737893 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-13 19:59:03.737904 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:13.257) 0:00:13.257 ***********
2025-05-13 19:59:03.737915 | orchestrator | ok: [testbed-manager] => {
2025-05-13 19:59:03.737925 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-13 19:59:03.737933 | orchestrator | }
2025-05-13 19:59:03.737939 | orchestrator |
2025-05-13 19:59:03.737958 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-13 19:59:03.737965 | orchestrator | Tuesday 13 May 2025 19:57:17 +0000 (0:00:08.566) 0:00:21.823 ***********
2025-05-13 19:59:03.737971 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.737978 | orchestrator |
2025-05-13 19:59:03.737985 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-13 19:59:03.737991 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:04.660) 0:00:26.483 ***********
2025-05-13 19:59:03.737998 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-13 19:59:03.738004 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-13 19:59:03.738011 | orchestrator |
2025-05-13 19:59:03.738058 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-13 19:59:03.738065 | orchestrator | Tuesday 13 May 2025 19:57:29 +0000 (0:00:07.211) 0:00:33.695 ***********
2025-05-13 19:59:03.738072 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738098 | orchestrator |
2025-05-13 19:59:03.738105 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-13 19:59:03.738117 | orchestrator | Tuesday 13 May 2025 19:57:32 +0000 (0:00:02.991) 0:00:36.687 ***********
2025-05-13 19:59:03.738123 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738130 | orchestrator |
2025-05-13 19:59:03.738136 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-13 19:59:03.738143 | orchestrator | Tuesday 13 May 2025 19:57:38 +0000 (0:00:05.433) 0:00:42.121 ***********
2025-05-13 19:59:03.738149 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-13 19:59:03.738156 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.738162 | orchestrator |
2025-05-13 19:59:03.738168 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-13 19:59:03.738175 | orchestrator | Tuesday 13 May 2025 19:58:03 +0000 (0:00:25.355) 0:01:07.477 ***********
2025-05-13 19:59:03.738181 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738200 | orchestrator |
2025-05-13 19:59:03.738207 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:59:03.738214 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.738221 | orchestrator |
2025-05-13 19:59:03.738227 | orchestrator |
2025-05-13 19:59:03.738234 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:59:03.738240 | orchestrator | Tuesday 13 May 2025 19:58:08 +0000 (0:00:05.468) 0:01:12.945 ***********
2025-05-13 19:59:03.738246 | orchestrator | ===============================================================================
2025-05-13 19:59:03.738253 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.36s
2025-05-13 19:59:03.738259 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 8.57s
2025-05-13 19:59:03.738265 | orchestrator | osism.services.homer : Create required directories ---------------------- 7.21s
2025-05-13 19:59:03.738271 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.47s
2025-05-13 19:59:03.738278 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 5.43s
2025-05-13 19:59:03.738284 | orchestrator | osism.services.homer : Create traefik external network ------------------ 4.66s
2025-05-13 19:59:03.738290 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.99s
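The homer role first ensures a shared `traefik` network exists so the dashboard container can attach to the reverse proxy. A minimal sketch of such a task, assuming the community.docker collection is used (the actual osism.services.homer implementation may differ), together with the compose-side declaration of the network as external:

- name: Create traefik external network
  community.docker.docker_network:
    name: traefik
    state: present

# Referenced from a docker-compose.yml as a pre-existing network:
# networks:
#   traefik:
#     external: true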
2025-05-13 19:59:03.738296 | orchestrator |
2025-05-13 19:59:03.738303 | orchestrator |
2025-05-13 19:59:03.738309 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-13 19:59:03.738315 | orchestrator |
2025-05-13 19:59:03.738321 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-13 19:59:03.738328 | orchestrator | Tuesday 13 May 2025 19:57:08 +0000 (0:00:12.045) 0:00:12.045 ***********
2025-05-13 19:59:03.738334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-13 19:59:03.738342 | orchestrator |
2025-05-13 19:59:03.738348 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-13 19:59:03.738354 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:10.396) 0:00:22.441 ***********
2025-05-13 19:59:03.738360 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-13 19:59:03.738366 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-13 19:59:03.738373 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-13 19:59:03.738379 | orchestrator |
2025-05-13 19:59:03.738386 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-13 19:59:03.738393 | orchestrator | Tuesday 13 May 2025 19:57:23 +0000 (0:00:04.725) 0:00:27.167 ***********
2025-05-13 19:59:03.738400 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738407 | orchestrator |
2025-05-13 19:59:03.738414 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-13 19:59:03.738422 | orchestrator | Tuesday 13 May 2025 19:57:28 +0000 (0:00:05.346) 0:00:32.514 ***********
2025-05-13 19:59:03.738477 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-13 19:59:03.738488 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.738496 | orchestrator |
2025-05-13 19:59:03.738504 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-13 19:59:03.738512 | orchestrator | Tuesday 13 May 2025 19:58:02 +0000 (0:00:33.146) 0:01:05.661 ***********
2025-05-13 19:59:03.738532 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738541 | orchestrator |
2025-05-13 19:59:03.738599 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-13 19:59:03.738607 | orchestrator | Tuesday 13 May 2025 19:58:04 +0000 (0:00:02.317) 0:01:07.978 ***********
2025-05-13 19:59:03.738616 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.738624 | orchestrator |
2025-05-13 19:59:03.738633 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-13 19:59:03.738648 | orchestrator | Tuesday 13 May 2025 19:58:07 +0000 (0:00:02.768) 0:01:10.746 ***********
2025-05-13 19:59:03.738657 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738665 | orchestrator |
2025-05-13 19:59:03.738673 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-13 19:59:03.738681 | orchestrator | Tuesday 13 May 2025 19:58:10 +0000 (0:00:03.275) 0:01:14.022 ***********
2025-05-13 19:59:03.738689 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738697 | orchestrator |
2025-05-13 19:59:03.738705 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-13 19:59:03.738713 | orchestrator | Tuesday 13 May 2025 19:58:12 +0000 (0:00:01.953) 0:01:15.976 ***********
2025-05-13 19:59:03.738721 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.738730 | orchestrator |
2025-05-13 19:59:03.738738 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-13 19:59:03.738751 | orchestrator | Tuesday 13 May 2025 19:58:15 +0000 (0:00:03.357) 0:01:19.333 ***********
2025-05-13 19:59:03.738758 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.738765 | orchestrator |
2025-05-13 19:59:03.738772 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:59:03.738780 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.738787 | orchestrator |
2025-05-13 19:59:03.738794 | orchestrator |
2025-05-13 19:59:03.738801 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:59:03.738808 | orchestrator | Tuesday 13 May 2025 19:58:18 +0000 (0:00:02.817) 0:01:22.150 ***********
2025-05-13 19:59:03.738816 | orchestrator | ===============================================================================
2025-05-13 19:59:03.738823 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.15s
2025-05-13 19:59:03.738830 | orchestrator | osism.services.openstackclient : Include tasks ------------------------- 10.40s
2025-05-13 19:59:03.738837 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 5.35s
2025-05-13 19:59:03.738844 | orchestrator | osism.services.openstackclient : Create required directories ------------ 4.73s
2025-05-13 19:59:03.738851 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 3.36s
2025-05-13 19:59:03.738858 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.28s
2025-05-13 19:59:03.738865 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 2.82s
2025-05-13 19:59:03.738872 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.77s
2025-05-13 19:59:03.738879 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.32s
2025-05-13 19:59:03.738886 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.95s
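The "FAILED - RETRYING: ... (10 retries left)." lines above are Ansible's standard retry output: the first attempt fails (typically while container images are still being pulled) and the task is re-run until it succeeds. The same goes for the "Wait for an healthy service" handler, which re-checks until the container's healthcheck reports healthy. A sketch of both patterns, assuming the community.docker collection and a container named openstackclient (the role's actual tasks may differ):

- name: Manage openstackclient service
  ansible.builtin.command: docker compose --project-directory /opt/openstackclient up -d
  register: result
  until: result is success
  retries: 10
  delay: 5
  changed_when: true

- name: Wait for a healthy service
  community.docker.docker_container_info:
    name: openstackclient
  register: info
  until: info.exists and (info.container.State.Health.Status | default('')) == 'healthy'
  retries: 30
  delay: 2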
2025-05-13 19:59:03.738893 | orchestrator |
2025-05-13 19:59:03.738905 | orchestrator |
2025-05-13 19:59:03.738921 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 19:59:03.738938 | orchestrator |
2025-05-13 19:59:03.738949 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 19:59:03.738961 | orchestrator | Tuesday 13 May 2025 19:56:59 +0000 (0:00:04.652) 0:00:04.652 ***********
2025-05-13 19:59:03.738972 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-13 19:59:03.738984 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-13 19:59:03.738995 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-13 19:59:03.739007 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-13 19:59:03.739019 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-13 19:59:03.739031 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-13 19:59:03.739044 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
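The group name enable_netdata_True in the items above is the giveaway for the underlying mechanism: hosts are sorted into dynamic groups from boolean service flags, so later plays can simply target the group. A sketch of the pattern (the loop over a single hypothetical flag stands in for the real list of services):

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}_{{ hostvars[inventory_hostname][item] }}"
  loop:
    - enable_netdata

A play with hosts: enable_netdata_True then runs only on hosts where the flag evaluated to True.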
2025-05-13 19:59:03.739066 | orchestrator |
2025-05-13 19:59:03.739078 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-13 19:59:03.739086 | orchestrator |
2025-05-13 19:59:03.739094 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-13 19:59:03.739101 | orchestrator | Tuesday 13 May 2025 19:57:15 +0000 (0:00:15.664) 0:00:20.317 ***********
2025-05-13 19:59:03.739118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:59:03.739131 | orchestrator |
2025-05-13 19:59:03.739138 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-13 19:59:03.739146 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:07.906) 0:00:28.223 ***********
2025-05-13 19:59:03.739153 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.739160 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:59:03.739167 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:59:03.739175 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:59:03.739182 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:59:03.739196 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:59:03.739203 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:59:03.739210 | orchestrator |
2025-05-13 19:59:03.739217 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-13 19:59:03.739225 | orchestrator | Tuesday 13 May 2025 19:57:31 +0000 (0:00:08.097) 0:00:36.321 ***********
2025-05-13 19:59:03.739232 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:59:03.739239 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.739246 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:59:03.739253 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:59:03.739260 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:59:03.739267 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:59:03.739274 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:59:03.739282 | orchestrator |
2025-05-13 19:59:03.739289 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-13 19:59:03.739296 | orchestrator | Tuesday 13 May 2025 19:57:39 +0000 (0:00:08.070) 0:00:44.392 ***********
2025-05-13 19:59:03.739303 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.739310 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:59:03.739318 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:59:03.739325 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:59:03.739332 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:59:03.739339 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:59:03.739346 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:59:03.739353 | orchestrator |
2025-05-13 19:59:03.739360 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-13 19:59:03.739368 | orchestrator | Tuesday 13 May 2025 19:57:42 +0000 (0:00:03.603) 0:00:47.996 ***********
2025-05-13 19:59:03.739375 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:59:03.739382 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:59:03.739389 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.739396 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:59:03.739403 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:59:03.739413 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:59:03.739421 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:59:03.739428 | orchestrator |
2025-05-13 19:59:03.739435 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-13 19:59:03.739442 | orchestrator | Tuesday 13 May 2025 19:57:56 +0000 (0:00:14.091) 0:01:02.088 ***********
2025-05-13 19:59:03.739449 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:59:03.739456 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:59:03.739463 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:59:03.739471 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:59:03.739478 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:59:03.739485 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:59:03.739501 | orchestrator | changed: [testbed-manager]
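The install sequence on these Debian-family hosts is the usual three steps: fetch the repository signing key, register the repository, install the package. A hedged sketch with standard modules; the key URL and repository line are assumptions for illustration, not values taken from the role:

- name: Add repository gpg key  # URL is hypothetical
  ansible.builtin.get_url:
    url: https://repo.netdata.cloud/netdatabot.gpg.key
    dest: /usr/share/keyrings/netdata.asc
    mode: "0644"

- name: Add repository  # repository line is hypothetical
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/usr/share/keyrings/netdata.asc] https://repo.netdata.cloud/repos/stable {{ ansible_distribution_release }}/"
    state: present

- name: Install package netdata
  ansible.builtin.apt:
    name: netdata
    update_cache: true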
2025-05-13 19:59:03.739508 | orchestrator |
2025-05-13 19:59:03.739515 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-13 19:59:03.739522 | orchestrator | Tuesday 13 May 2025 19:58:18 +0000 (0:00:21.643) 0:01:23.731 ***********
2025-05-13 19:59:03.739530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:59:03.739539 | orchestrator |
2025-05-13 19:59:03.739564 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-13 19:59:03.739572 | orchestrator | Tuesday 13 May 2025 19:58:21 +0000 (0:00:03.496) 0:01:27.228 ***********
2025-05-13 19:59:03.739579 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-13 19:59:03.739587 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-13 19:59:03.739594 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-13 19:59:03.739601 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-13 19:59:03.739608 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-13 19:59:03.739615 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-13 19:59:03.739622 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-13 19:59:03.739629 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-13 19:59:03.739636 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-13 19:59:03.739643 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-13 19:59:03.739650 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-13 19:59:03.739657 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-13 19:59:03.739664 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-13 19:59:03.739670 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-13 19:59:03.739677 | orchestrator |
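The loop items above (netdata.conf, stream.conf) suggest a simple templated-file loop that notifies the restart handler seen later in this play. A minimal sketch under that assumption; template names and file mode are illustrative:

- name: Copy configuration files
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/netdata/{{ item }}"
    mode: "0644"
  loop:
    - netdata.conf
    - stream.conf
  notify: Restart service netdata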
2025-05-13 19:59:03.739685 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-13 19:59:03.739692 | orchestrator | Tuesday 13 May 2025 19:58:33 +0000 (0:00:12.032) 0:01:39.261 ***********
2025-05-13 19:59:03.739699 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.739706 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:59:03.739713 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:59:03.739720 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:59:03.739727 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:59:03.739735 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:59:03.739742 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:59:03.739749 | orchestrator |
2025-05-13 19:59:03.739756 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-13 19:59:03.739763 | orchestrator | Tuesday 13 May 2025 19:58:36 +0000 (0:00:02.745) 0:01:42.006 ***********
2025-05-13 19:59:03.739771 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.739778 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:59:03.739785 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:59:03.739792 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:59:03.739799 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:59:03.739806 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:59:03.739814 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:59:03.739821 | orchestrator |
2025-05-13 19:59:03.739828 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-13 19:59:03.739840 | orchestrator | Tuesday 13 May 2025 19:58:40 +0000 (0:00:03.863) 0:01:45.870 ***********
2025-05-13 19:59:03.739847 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:59:03.739855 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:59:03.739862 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:59:03.739869 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.739876 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:59:03.739883 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:59:03.739895 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:59:03.739902 | orchestrator |
2025-05-13 19:59:03.739909 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-13 19:59:03.739916 | orchestrator | Tuesday 13 May 2025 19:58:44 +0000 (0:00:03.413) 0:01:49.283 ***********
2025-05-13 19:59:03.739923 | orchestrator | ok: [testbed-node-0]
2025-05-13 19:59:03.739931 | orchestrator | ok: [testbed-node-1]
2025-05-13 19:59:03.739937 | orchestrator | ok: [testbed-node-3]
2025-05-13 19:59:03.739945 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:03.739952 | orchestrator | ok: [testbed-node-4]
2025-05-13 19:59:03.739959 | orchestrator | ok: [testbed-node-2]
2025-05-13 19:59:03.739966 | orchestrator | ok: [testbed-node-5]
2025-05-13 19:59:03.739973 | orchestrator |
2025-05-13 19:59:03.739980 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-13 19:59:03.739987 | orchestrator | Tuesday 13 May 2025 19:58:47 +0000 (0:00:03.225) 0:01:52.627 ***********
2025-05-13 19:59:03.739994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-13 19:59:03.740002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:59:03.740010 | orchestrator |
2025-05-13 19:59:03.740017 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-13 19:59:03.740027 | orchestrator | Tuesday 13 May 2025 19:58:50 +0000 (0:00:02.844) 0:01:55.852 ***********
2025-05-13 19:59:03.740035 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.740042 | orchestrator |
2025-05-13 19:59:03.740049 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-13 19:59:03.740058 | orchestrator | Tuesday 13 May 2025 19:58:53 +0000 (0:00:02.844) 0:01:58.697 ***********
2025-05-13 19:59:03.740070 | orchestrator | changed: [testbed-node-0]
2025-05-13 19:59:03.740082 | orchestrator | changed: [testbed-node-2]
2025-05-13 19:59:03.740095 | orchestrator | changed: [testbed-node-1]
2025-05-13 19:59:03.740107 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:03.740119 | orchestrator | changed: [testbed-node-3]
2025-05-13 19:59:03.740131 | orchestrator | changed: [testbed-node-4]
2025-05-13 19:59:03.740142 | orchestrator | changed: [testbed-node-5]
2025-05-13 19:59:03.740154 | orchestrator |
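Three small host-level tweaks run here: touching netdata's documented opt-out marker file, letting the netdata user read Docker metrics, and raising vm.max_map_count on the manager (the streaming parent). Sketches with standard modules; the sysctl value is an assumption, since the log does not show it:

- name: Opt out from anonymous statistics
  ansible.builtin.file:
    path: /etc/netdata/.opt-out-from-anonymous-statistics
    state: touch
    modification_time: preserve
    access_time: preserve

- name: Add netdata user to docker group
  ansible.builtin.user:
    name: netdata
    groups: docker
    append: true

- name: Set sysctl vm.max_map_count parameter
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"  # assumed value for illustration
    state: present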
2025-05-13 19:59:03.740166 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:59:03.740179 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740191 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740204 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740216 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740229 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740291 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740299 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:03.740306 | orchestrator |
2025-05-13 19:59:03.740313 | orchestrator |
2025-05-13 19:59:03.740321 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:59:03.740328 | orchestrator | Tuesday 13 May 2025 19:59:01 +0000 (0:00:08.454) 0:02:07.152 ***********
2025-05-13 19:59:03.740343 | orchestrator | ===============================================================================
2025-05-13 19:59:03.740351 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 21.64s
2025-05-13 19:59:03.740359 | orchestrator | Group hosts based on enabled services ---------------------------------- 15.66s
2025-05-13 19:59:03.740366 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.09s
2025-05-13 19:59:03.740374 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 12.03s
2025-05-13 19:59:03.740381 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 8.45s
2025-05-13 19:59:03.740389 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 8.10s
2025-05-13 19:59:03.740397 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 8.07s
2025-05-13 19:59:03.740404 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 7.91s
2025-05-13 19:59:03.740412 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.86s
2025-05-13 19:59:03.740420 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.60s
2025-05-13 19:59:03.740428 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.50s
2025-05-13 19:59:03.740442 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.41s
2025-05-13 19:59:03.740450 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.34s
2025-05-13 19:59:03.740457 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.23s
2025-05-13 19:59:03.740465 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.84s
2025-05-13 19:59:03.740473 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.75s
2025-05-13 19:59:03.740613 | orchestrator | 2025-05-13 19:59:03 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:03.740628 | orchestrator | 2025-05-13 19:59:03 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:03.741407 | orchestrator | 2025-05-13 19:59:03 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:03.742804 | orchestrator | 2025-05-13 19:59:03 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:03.742830 | orchestrator | 2025-05-13 19:59:03 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:06.799638 | orchestrator | 2025-05-13 19:59:06 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:06.800683 | orchestrator | 2025-05-13 19:59:06 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:06.802786 | orchestrator | 2025-05-13 19:59:06 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:06.804153 | orchestrator | 2025-05-13 19:59:06 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:06.804575 | orchestrator | 2025-05-13 19:59:06 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:09.852349 | orchestrator | 2025-05-13 19:59:09 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:09.853494 | orchestrator | 2025-05-13 19:59:09 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:09.855633 | orchestrator | 2025-05-13 19:59:09 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:09.856874 | orchestrator | 2025-05-13 19:59:09 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:09.856909 | orchestrator | 2025-05-13 19:59:09 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:12.902871 | orchestrator | 2025-05-13 19:59:12 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:12.904171 | orchestrator | 2025-05-13 19:59:12 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:12.906235 | orchestrator | 2025-05-13 19:59:12 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:12.906839 | orchestrator | 2025-05-13 19:59:12 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:12.907367 | orchestrator | 2025-05-13 19:59:12 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:15.951855 | orchestrator | 2025-05-13 19:59:15 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:15.953021 | orchestrator | 2025-05-13 19:59:15 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:15.954327 | orchestrator | 2025-05-13 19:59:15 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:15.955761 | orchestrator | 2025-05-13 19:59:15 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:15.955830 | orchestrator | 2025-05-13 19:59:15 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:19.005712 | orchestrator | 2025-05-13 19:59:19 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state STARTED
2025-05-13 19:59:19.005860 | orchestrator | 2025-05-13 19:59:19 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:19.005874 | orchestrator | 2025-05-13 19:59:19 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:19.007409 | orchestrator | 2025-05-13 19:59:19 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:19.007430 | orchestrator | 2025-05-13 19:59:19 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:22.058464 | orchestrator | 2025-05-13 19:59:22 | INFO  | Task 562f9700-1108-4a81-8c95-792fe7530ebf is in state SUCCESS
2025-05-13 19:59:22.059356 | orchestrator | 2025-05-13 19:59:22 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:22.061172 | orchestrator | 2025-05-13 19:59:22 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:22.062115 | orchestrator | 2025-05-13 19:59:22 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:22.062148 | orchestrator | 2025-05-13 19:59:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:25.107875 | orchestrator | 2025-05-13 19:59:25 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:25.107992 | orchestrator | 2025-05-13 19:59:25 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:25.108682 | orchestrator | 2025-05-13 19:59:25 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:25.108718 | orchestrator | 2025-05-13 19:59:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:28.155661 | orchestrator | 2025-05-13 19:59:28 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:28.156645 | orchestrator | 2025-05-13 19:59:28 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:28.158200 | orchestrator | 2025-05-13 19:59:28 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:28.158235 | orchestrator | 2025-05-13 19:59:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:31.216627 | orchestrator | 2025-05-13 19:59:31 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:31.217489 | orchestrator | 2025-05-13 19:59:31 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:31.218572 | orchestrator | 2025-05-13 19:59:31 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:31.218602 | orchestrator | 2025-05-13 19:59:31 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:34.273694 | orchestrator | 2025-05-13 19:59:34 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:34.273955 | orchestrator | 2025-05-13 19:59:34 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:34.275971 | orchestrator | 2025-05-13 19:59:34 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state STARTED
2025-05-13 19:59:34.276771 | orchestrator | 2025-05-13 19:59:34 | INFO  | Wait 1 second(s) until the next check
2025-05-13 19:59:37.320305 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED
2025-05-13 19:59:37.321597 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED
2025-05-13 19:59:37.322442 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED
2025-05-13 19:59:37.323112 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 19:59:37.323843 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED
2025-05-13 19:59:37.325397 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 19:59:37.329104 | orchestrator | 2025-05-13 19:59:37 | INFO  | Task 2305edb8-8227-47b8-9713-c581a0fc907c is in state SUCCESS
2025-05-13 19:59:37.331667 | orchestrator |
2025-05-13 19:59:37.331723 | orchestrator |
2025-05-13 19:59:37.331745 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-13 19:59:37.331766 | orchestrator |
2025-05-13 19:59:37.331786 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-13 19:59:37.331807 | orchestrator | Tuesday 13 May 2025 19:57:51 +0000 (0:00:05.009) 0:00:05.009 ***********
2025-05-13 19:59:37.331871 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:37.331892 | orchestrator |
2025-05-13 19:59:37.331911 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-13 19:59:37.331930 | orchestrator | Tuesday 13 May 2025 19:57:55 +0000 (0:00:03.297) 0:00:08.307 ***********
2025-05-13 19:59:37.331950 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-13 19:59:37.331969 | orchestrator |
2025-05-13 19:59:37.331988 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-13 19:59:37.332008 | orchestrator | Tuesday 13 May 2025 19:57:57 +0000 (0:00:02.104) 0:00:10.412 ***********
2025-05-13 19:59:37.332027 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:37.332046 | orchestrator |
2025-05-13 19:59:37.332063 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-13 19:59:37.332081 | orchestrator | Tuesday 13 May 2025 19:57:59 +0000 (0:00:02.739) 0:00:13.152 ***********
2025-05-13 19:59:37.332099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-13 19:59:37.332117 | orchestrator | ok: [testbed-manager]
2025-05-13 19:59:37.332135 | orchestrator |
2025-05-13 19:59:37.332152 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-13 19:59:37.332170 | orchestrator | Tuesday 13 May 2025 19:59:00 +0000 (0:01:00.816) 0:01:13.969 ***********
2025-05-13 19:59:37.332187 | orchestrator | changed: [testbed-manager]
2025-05-13 19:59:37.332205 | orchestrator |
2025-05-13 19:59:37.332223 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 19:59:37.332269 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 19:59:37.332290 | orchestrator |
2025-05-13 19:59:37.332311 | orchestrator |
2025-05-13 19:59:37.332331 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 19:59:37.332349 | orchestrator | Tuesday 13 May 2025 19:59:21 +0000 (0:00:20.381) 0:01:34.350 ***********
2025-05-13 19:59:37.332368 | orchestrator | ===============================================================================
2025-05-13 19:59:37.332388 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.82s
2025-05-13 19:59:37.332407 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 20.38s
2025-05-13 19:59:37.332426 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 3.30s
2025-05-13 19:59:37.332444 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.74s
2025-05-13 19:59:37.332509 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 2.11s
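The phpmyadmin play follows the same shape as homer and openstackclient: copy a docker-compose.yml into /opt/phpmyadmin, then bring the compose project up with retries (here the first attempt failed and the task succeeded about a minute later, which is consistent with a slow image pull). A hedged sketch of such a "Manage ... service" task, assuming the community.docker collection rather than the role's actual implementation:

- name: Manage phpmyadmin service
  community.docker.docker_compose_v2:
    project_src: /opt/phpmyadmin
    state: present
  register: result
  until: result is success
  retries: 10
  delay: 10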
2025-05-13 19:59:37.332528 | orchestrator |
2025-05-13 19:59:37.332547 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-13 19:59:37.332565 | orchestrator |
2025-05-13 19:59:37.332584 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-13 19:59:37.332603 | orchestrator | Tuesday 13 May 2025 19:56:48 +0000 (0:00:00.250) 0:00:00.250 ***********
2025-05-13 19:59:37.332633 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:59:37.332654 | orchestrator |
2025-05-13 19:59:37.332674 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-13 19:59:37.332694 | orchestrator | Tuesday 13 May 2025 19:56:49 +0000 (0:00:01.338) 0:00:01.589 ***********
2025-05-13 19:59:37.332712 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332730 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332747 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332765 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.332781 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.332798 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.332815 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332833 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332850 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.332868 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332884 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-13 19:59:37.332901 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.332917 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.332935 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.332953 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.332969 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.333004 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-13 19:59:37.333020 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.333053 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.333069 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-13 19:59:37.333085 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
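The two-element item shape [{'service_name': 'cron'}, 'cron'] indicates the task loops over pairs of per-service settings and service names. A hedged sketch of one way to produce that shape with the zip filter; the variable names and target path are hypothetical, not taken from the role:

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.1 }}"
    state: directory
    mode: "0770"
  loop: "{{ service_defs | zip(service_names) | list }}"
  vars:
    service_defs:
      - service_name: cron
      - service_name: fluentd
      - service_name: kolla-toolbox
    service_names:
      - cron
      - fluentd
      - kolla-toolbox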
2025-05-13 19:59:37.333102 | orchestrator |
2025-05-13 19:59:37.333119 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-13 19:59:37.333135 | orchestrator | Tuesday 13 May 2025 19:56:53 +0000 (0:00:04.056) 0:00:05.645 ***********
2025-05-13 19:59:37.333152 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 19:59:37.333169 | orchestrator |
2025-05-13 19:59:37.333186 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-13 19:59:37.333202 | orchestrator | Tuesday 13 May 2025 19:56:55 +0000 (0:00:01.471) 0:00:07.117 ***********
2025-05-13 19:59:37.333224 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-13 19:59:37.333386 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333525 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 19:59:37.333699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
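Each item above has the {'key': ..., 'value': ...} shape produced by Ansible's dict2items filter, i.e. the task iterates over a dictionary of service definitions and copies the extra CA bundle into each enabled service's config directory. A hedged sketch of that loop; the variable names and paths are hypothetical:

- name: common | Copying over extra CA certificates
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"   # hypothetical variable
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
    mode: "0644"
  loop: "{{ common_services | dict2items }}"  # hypothetical variable
  when: item.value.enabled | bool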
orchestrator |
2025-05-13 19:59:37.333743 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-13 19:59:37.333759 | orchestrator | Tuesday 13 May 2025 19:56:59 +0000 (0:00:04.661) 0:00:11.778 ***********
2025-05-13 19:59:37.333786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.333805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333822 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333838 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:59:37.333855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.333872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.333897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.333992 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:59:37.334009 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:59:37.334141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-13 19:59:37.334205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334262 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:59:37.334277 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:59:37.334300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334359 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
19:59:37.334374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334433 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:59:37.334448 | orchestrator |
2025-05-13 19:59:37.334462 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-13 19:59:37.334476 | orchestrator | Tuesday 13 May 2025 19:57:01 +0000 (0:00:01.164) 0:00:12.943 ***********
2025-05-13 19:59:37.334528 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334542 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334581 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:59:37.334594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334638 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:59:37.334658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-13 19:59:37.334714 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:59:37.334728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.334971 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:59:37.334986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.334998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335057 | orchestrator | skipping: [testbed-node-3] 2025-05-13 
19:59:37.335069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.335080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335120 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:59:37.335132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-13 19:59:37.335144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.335174 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:59:37.335185 | orchestrator |
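Note: both service-cert-copy tasks above skip every host and every item. That is expected here: they only copy a backend TLS certificate and key into each service's config directory when backend TLS is enabled, and the testbed leaves kolla-ansible's kolla_enable_tls_backend toggle at its default of "no". A minimal Python sketch of the gating, with illustrative names rather than the role's actual code:

common_services = {
    "fluentd": {"enabled": True},
    "kolla-toolbox": {"enabled": True},
    "cron": {"enabled": True},
}
kolla_enable_tls_backend = False  # testbed default; "yes" in globals.yml enables the copy

for name, service in common_services.items():
    if kolla_enable_tls_backend and service["enabled"]:
        print(f"changed: copying backend TLS material for {name}")
    else:
        print(f"skipping: {name}")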
2025-05-13 19:59:37.335197 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-13 19:59:37.335209 | orchestrator | Tuesday 13 May 2025 19:57:02 +0000 (0:00:01.900) 0:00:14.843 ***********
2025-05-13 19:59:37.335220 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:59:37.335230 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:59:37.335241 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:59:37.335252 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:59:37.335263 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:59:37.335273 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:59:37.335284 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:59:37.335294 | orchestrator |
2025-05-13 19:59:37.335305 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-13 19:59:37.335316 | orchestrator | Tuesday 13 May 2025 19:57:03 +0000 (0:00:01.008) 0:00:15.851 ***********
2025-05-13 19:59:37.335327 | orchestrator | skipping: [testbed-manager] 2025-05-13 19:59:37.335338 | orchestrator | skipping: [testbed-node-0] 2025-05-13 19:59:37.335353 | orchestrator | skipping: [testbed-node-1] 2025-05-13 19:59:37.335365 | orchestrator | skipping: [testbed-node-2] 2025-05-13 19:59:37.335376 | orchestrator | skipping: [testbed-node-3] 2025-05-13 19:59:37.335389 | orchestrator | skipping: [testbed-node-4] 2025-05-13 19:59:37.335400 | orchestrator | skipping: [testbed-node-5] 2025-05-13 19:59:37.335412 | orchestrator |
2025-05-13 19:59:37.335424 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-13 19:59:37.335436 | orchestrator | Tuesday 13 May 2025 19:57:04 +0000 (0:00:01.009) 0:00:16.860 ***********
2025-05-13 19:59:37.335449 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335506 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335521 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335567 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.335657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.335780 | orchestrator |
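Note: the config.json files written above drive kolla's container entrypoint: on start it reads /var/lib/kolla/config_files/config.json, copies the listed files into place, applies the requested ownership and permissions, and then execs the service command. A Python sketch of the file's general shape (the values are placeholders, not the exact content templated for these containers):

import json

config_json = {
    "command": "fluentd -c /etc/fluentd/fluentd.conf",  # placeholder command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/fluentd.conf",
            "dest": "/etc/fluentd/fluentd.conf",
            "owner": "fluentd",
            "perm": "0600",
        }
    ],
    "permissions": [
        {"path": "/var/log/kolla", "owner": "fluentd:kolla", "recurse": True}
    ],
}
print(json.dumps(config_json, indent=4))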
2025-05-13 19:59:37.335791 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-13 19:59:37.335801 | orchestrator | Tuesday 13 May 2025 19:57:10 +0000 (0:00:05.312) 0:00:22.173 ***********
2025-05-13 19:59:37.335813 | orchestrator | [WARNING]: Skipped 2025-05-13 19:59:37.335825 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-13 19:59:37.335836 | orchestrator | to this access issue: 2025-05-13 19:59:37.335847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-13 19:59:37.335858 | orchestrator | directory 2025-05-13 19:59:37.335869 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:59:37.335880 | orchestrator |
2025-05-13 19:59:37.335890 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-13 19:59:37.335901 | orchestrator | Tuesday 13 May 2025 19:57:11 +0000 (0:00:01.376) 0:00:23.549 ***********
2025-05-13 19:59:37.335911 | orchestrator | [WARNING]: Skipped 2025-05-13 19:59:37.335922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-13 19:59:37.335933 | orchestrator | to this access issue: 2025-05-13 19:59:37.335944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-13 19:59:37.335954 | orchestrator | directory 2025-05-13 19:59:37.335965 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:59:37.335976 | orchestrator |
2025-05-13 19:59:37.335986 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-13 19:59:37.336002 | orchestrator | Tuesday 13 May 2025 19:57:12 +0000 (0:00:00.731) 0:00:24.281 ***********
2025-05-13 19:59:37.336013 | orchestrator | [WARNING]: Skipped 2025-05-13 19:59:37.336023 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-13 19:59:37.336034 | orchestrator | to this access issue: 2025-05-13 19:59:37.336045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-13 19:59:37.336055 | orchestrator | directory 2025-05-13 19:59:37.336066 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:59:37.336077 | orchestrator |
2025-05-13 19:59:37.336088 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-13 19:59:37.336098 | orchestrator | Tuesday 13 May 2025 19:57:13 +0000 (0:00:00.757) 0:00:25.039 ***********
2025-05-13 19:59:37.336109 | orchestrator | [WARNING]: Skipped 2025-05-13 19:59:37.336120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-13 19:59:37.336130 | orchestrator | to this access issue: 2025-05-13 19:59:37.336141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-13 19:59:37.336152 | orchestrator | directory 2025-05-13 19:59:37.336162 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 19:59:37.336173 | orchestrator |
2025-05-13 19:59:37.336184 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-05-13 19:59:37.336201 | orchestrator | Tuesday 13 May 2025 19:57:13 +0000 (0:00:00.658) 0:00:25.697 ***********
2025-05-13 19:59:37.336212 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.336223 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.336234 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.336244 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.336255 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.336265 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.336276 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.336286 | orchestrator |
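Note: the four "Find custom fluentd ... config files" tasks look for operator-supplied snippets under /opt/configuration/environments/kolla/files/overlays/fluentd/{input,filter,format,output}. The [WARNING] lines only mean those overlay directories do not exist in this testbed's configuration repository, so fluentd.conf is rendered from the defaults alone. A sketch of the lookup under that assumption (the merge step itself is kolla-ansible's and is not shown):

from pathlib import Path

overlay_root = Path("/opt/configuration/environments/kolla/files/overlays/fluentd")

for section in ("input", "filter", "format", "output"):
    directory = overlay_root / section
    if directory.is_dir():
        snippets = sorted(directory.glob("*.conf"))
        print(f"{section}: {len(snippets)} custom snippet(s) to merge")
    else:
        # matches the log: Ansible's find module warns and returns nothing
        print(f"{section}: {directory} is not a directory; using defaults")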
2025-05-13 19:59:37.336297 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-13 19:59:37.336321 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:04.348) 0:00:30.045 ***********
2025-05-13 19:59:37.336333 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336382 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336394 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336405 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336416 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-13 19:59:37.336426 | orchestrator |
2025-05-13 19:59:37.336437 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-13 19:59:37.336448 | orchestrator | Tuesday 13 May 2025 19:57:21 +0000 (0:00:03.773) 0:00:33.820 ***********
2025-05-13 19:59:37.336458 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.336469 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.336480 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.336510 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.336521 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.336531 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.336542 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.336552 | orchestrator |
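Note: RabbitMQ nodes only cluster when they share the same Erlang cookie, so the task above distributes a single generated value (rabbitmq_cluster_cookie from kolla-ansible's passwords file) to all seven hosts. A quick consistency check in Python (host names taken from this deployment; the on-disk cookie path is an assumption about the rendered config):

import subprocess

def read_cookie(host: str) -> str:
    # hypothetical helper: fetch the rendered cookie file over SSH
    output = subprocess.check_output(
        ["ssh", host, "sudo", "cat", "/etc/kolla/rabbitmq/.erlang.cookie"],
        text=True,
    )
    return output.strip()

hosts = ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]
cookies = {host: read_cookie(host) for host in hosts}
assert len(set(cookies.values())) == 1, "cookie mismatch: RabbitMQ clustering will fail"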
2025-05-13 19:59:37.336573 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-05-13 19:59:37.336585 | orchestrator | Tuesday 13 May 2025 19:57:25 +0000 (0:00:03.573) 0:00:37.394 ***********
2025-05-13 19:59:37.336596 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336619 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336650 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336693 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336716 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336762 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336791 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336815 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.336842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 19:59:37.336864 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336875 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336886 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.336898 | orchestrator |
2025-05-13 19:59:37.336909 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-05-13 19:59:37.336920 | orchestrator | Tuesday 13 May 2025 19:57:27 +0000 (0:00:02.101) 0:00:39.495 ***********
2025-05-13 19:59:37.336930 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.336941 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.336952 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.336968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-13 19:59:37.336979 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.336990 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.337000 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-13 19:59:37.337011 | orchestrator |
2025-05-13 19:59:37.337022 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-13 19:59:37.337032 | orchestrator | Tuesday 13 May 2025 19:57:30 +0000 (0:00:02.570) 0:00:42.066 ***********
2025-05-13 19:59:37.337043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337054 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337086 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337096 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337107 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-13 19:59:37.337124 | orchestrator |
2025-05-13 19:59:37.337135 | orchestrator | TASK [common : Check common containers] ****************************************
2025-05-13 19:59:37.337146 | orchestrator | Tuesday 13 May 2025 19:57:33 +0000 (0:00:03.221) 0:00:45.288 ***********
2025-05-13 19:59:37.337157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337186 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337197 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337238 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 
19:59:37.337283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-13 19:59:37.337295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337307 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 19:59:37.337439 | orchestrator | 2025-05-13 19:59:37.337456 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-13 19:59:37.337467 | orchestrator | Tuesday 13 May 2025 19:57:37 +0000 (0:00:03.583) 0:00:48.871 *********** 2025-05-13 19:59:37.337478 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.337508 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.337519 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.337529 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.337547 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.337558 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.337569 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.337580 
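The "Check common containers" task above iterates over a mapping of container definitions; each value carries the image, environment, volume specs, and an enabled flag (disabled entries surface as "skipping" in the log). A minimal sketch of that data shape and the enabled-filter loop, with the fluentd entry abridged from the log output above; the actual check is performed by kolla-ansible's own container module, so the loop body here is illustrative only.

# Data shape of the definitions that "Check common containers" loops over.
# The fluentd entry is abridged from the log above; check logic is a sketch.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def parse_volume(spec: str) -> tuple[str, str, str]:
    """Split a 'src:dst[:mode]' volume spec; src may be a path or a named volume."""
    src, dst, *mode = spec.split(":")
    return src, dst, (mode[0] if mode else "rw")

for name, svc in common_services.items():
    if not svc["enabled"]:
        continue  # disabled services are what the log reports as "skipping"
    for spec in svc["volumes"]:
        src, dst, mode = parse_volume(spec)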
| orchestrator | 2025-05-13 19:59:37.337591 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-13 19:59:37.337601 | orchestrator | Tuesday 13 May 2025 19:57:38 +0000 (0:00:01.860) 0:00:50.731 *********** 2025-05-13 19:59:37.337612 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.337623 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.337633 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.337644 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.337655 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.337666 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.337677 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.337687 | orchestrator | 2025-05-13 19:59:37.337698 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337709 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:01.548) 0:00:52.280 *********** 2025-05-13 19:59:37.337720 | orchestrator | 2025-05-13 19:59:37.337730 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337741 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:00.108) 0:00:52.389 *********** 2025-05-13 19:59:37.337752 | orchestrator | 2025-05-13 19:59:37.337762 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337773 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:00.109) 0:00:52.499 *********** 2025-05-13 19:59:37.337784 | orchestrator | 2025-05-13 19:59:37.337795 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337805 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:00.246) 0:00:52.745 *********** 2025-05-13 19:59:37.337816 | orchestrator | 2025-05-13 19:59:37.337826 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337837 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:00.106) 0:00:52.851 *********** 2025-05-13 19:59:37.337848 | orchestrator | 2025-05-13 19:59:37.337859 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337870 | orchestrator | Tuesday 13 May 2025 19:57:41 +0000 (0:00:00.091) 0:00:52.943 *********** 2025-05-13 19:59:37.337880 | orchestrator | 2025-05-13 19:59:37.337891 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-13 19:59:37.337902 | orchestrator | Tuesday 13 May 2025 19:57:41 +0000 (0:00:00.086) 0:00:53.029 *********** 2025-05-13 19:59:37.337912 | orchestrator | 2025-05-13 19:59:37.337923 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-13 19:59:37.337934 | orchestrator | Tuesday 13 May 2025 19:57:41 +0000 (0:00:00.175) 0:00:53.205 *********** 2025-05-13 19:59:37.337944 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.337955 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.337966 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.337976 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.337987 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.337998 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.338008 | orchestrator | changed: [testbed-node-3] 2025-05-13 
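The "Creating log volume" and "Link kolla_logs volume to /var/log/kolla" tasks above create a named Docker volume shared by all kolla containers and expose it at a stable host path. A rough equivalent using the Docker SDK for Python; kolla-ansible does this through its own modules, and the _data path below assumes the default local volume driver, so both are assumptions of this sketch.

import os
import docker

client = docker.from_env()

# "Creating log volume": one named volume mounted into every kolla container.
client.volumes.create(name="kolla_logs")

# "Link kolla_logs volume to /var/log/kolla": assumes the default local
# volume driver, which keeps volume data under
# /var/lib/docker/volumes/<name>/_data (an assumption in this sketch).
target = "/var/lib/docker/volumes/kolla_logs/_data"
if not os.path.islink("/var/log/kolla"):
    os.symlink(target, "/var/log/kolla")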
19:59:37.338055 | orchestrator | 2025-05-13 19:59:37.338072 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-13 19:59:37.338083 | orchestrator | Tuesday 13 May 2025 19:58:25 +0000 (0:00:44.502) 0:01:37.707 *********** 2025-05-13 19:59:37.338097 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.338108 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.338118 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.338129 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.338139 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.338150 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.338161 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.338178 | orchestrator | 2025-05-13 19:59:37.338189 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-13 19:59:37.338200 | orchestrator | Tuesday 13 May 2025 19:59:24 +0000 (0:00:58.470) 0:02:36.178 *********** 2025-05-13 19:59:37.338211 | orchestrator | ok: [testbed-node-0] 2025-05-13 19:59:37.338222 | orchestrator | ok: [testbed-node-1] 2025-05-13 19:59:37.338233 | orchestrator | ok: [testbed-manager] 2025-05-13 19:59:37.338244 | orchestrator | ok: [testbed-node-2] 2025-05-13 19:59:37.338254 | orchestrator | ok: [testbed-node-3] 2025-05-13 19:59:37.338265 | orchestrator | ok: [testbed-node-4] 2025-05-13 19:59:37.338276 | orchestrator | ok: [testbed-node-5] 2025-05-13 19:59:37.338286 | orchestrator | 2025-05-13 19:59:37.338297 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-13 19:59:37.338308 | orchestrator | Tuesday 13 May 2025 19:59:26 +0000 (0:00:01.820) 0:02:37.998 *********** 2025-05-13 19:59:37.338319 | orchestrator | changed: [testbed-manager] 2025-05-13 19:59:37.338329 | orchestrator | changed: [testbed-node-0] 2025-05-13 19:59:37.338340 | orchestrator | changed: [testbed-node-2] 2025-05-13 19:59:37.338350 | orchestrator | changed: [testbed-node-3] 2025-05-13 19:59:37.338361 | orchestrator | changed: [testbed-node-4] 2025-05-13 19:59:37.338372 | orchestrator | changed: [testbed-node-5] 2025-05-13 19:59:37.338382 | orchestrator | changed: [testbed-node-1] 2025-05-13 19:59:37.338393 | orchestrator | 2025-05-13 19:59:37.338404 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 19:59:37.338416 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338427 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338446 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338457 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338468 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338479 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338561 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-13 19:59:37.338572 | orchestrator | 2025-05-13 19:59:37.338582 | orchestrator | 2025-05-13 19:59:37.338593 | orchestrator | TASKS RECAP 
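The RUNNING HANDLER entries restart the fluentd, kolla-toolbox, and cron containers in turn, with the kolla-toolbox restart dominating the runtime (58.47s in the recap that follows). A bare-bones equivalent with the Docker SDK; kolla-ansible drives this through its own container module, and the exec command and user below are placeholders, since the log does not show what "Initializing toolbox container using normal user" actually runs.

import docker

client = docker.from_env()

# Containers restarted by the common-role handlers, in the order they
# ran in this log.
for name in ("fluentd", "kolla_toolbox", "cron"):
    client.containers.get(name).restart()

# "Initializing toolbox container using normal user" runs a command inside
# the freshly restarted toolbox; command and user here are placeholders.
client.containers.get("kolla_toolbox").exec_run("true", user="ansible")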
******************************************************************** 2025-05-13 19:59:37.338604 | orchestrator | Tuesday 13 May 2025 19:59:34 +0000 (0:00:08.655) 0:02:46.654 *********** 2025-05-13 19:59:37.338615 | orchestrator | =============================================================================== 2025-05-13 19:59:37.338625 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 58.47s 2025-05-13 19:59:37.338636 | orchestrator | common : Restart fluentd container ------------------------------------- 44.50s 2025-05-13 19:59:37.338646 | orchestrator | common : Restart cron container ----------------------------------------- 8.66s 2025-05-13 19:59:37.338657 | orchestrator | common : Copying over config.json files for services -------------------- 5.32s 2025-05-13 19:59:37.338668 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.66s 2025-05-13 19:59:37.338678 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.35s 2025-05-13 19:59:37.338688 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.06s 2025-05-13 19:59:37.338699 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.77s 2025-05-13 19:59:37.338717 | orchestrator | common : Check common containers ---------------------------------------- 3.58s 2025-05-13 19:59:37.338750 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.57s 2025-05-13 19:59:37.338762 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.22s 2025-05-13 19:59:37.338773 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.57s 2025-05-13 19:59:37.338783 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.10s 2025-05-13 19:59:37.338794 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.90s 2025-05-13 19:59:37.338805 | orchestrator | common : Creating log volume -------------------------------------------- 1.86s 2025-05-13 19:59:37.338815 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.82s 2025-05-13 19:59:37.338826 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.55s 2025-05-13 19:59:37.338843 | orchestrator | common : include_tasks -------------------------------------------------- 1.47s 2025-05-13 19:59:37.338854 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.37s 2025-05-13 19:59:37.338865 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s 2025-05-13 19:59:37.339017 | orchestrator | 2025-05-13 19:59:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:40.412359 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:40.412450 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:40.412732 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:40.413552 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:40.414222 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task 
41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:40.417122 | orchestrator | 2025-05-13 19:59:40 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:40.417137 | orchestrator | 2025-05-13 19:59:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:43.454811 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:43.455946 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:43.460930 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:43.461799 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:43.463973 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:43.464517 | orchestrator | 2025-05-13 19:59:43 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:43.464538 | orchestrator | 2025-05-13 19:59:43 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:46.503846 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:46.503963 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:46.503978 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:46.503990 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:46.504718 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:46.505300 | orchestrator | 2025-05-13 19:59:46 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:46.505336 | orchestrator | 2025-05-13 19:59:46 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:49.535854 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:49.536804 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:49.537427 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:49.538777 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:49.544701 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:49.545054 | orchestrator | 2025-05-13 19:59:49 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:49.545171 | orchestrator | 2025-05-13 19:59:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:52.592264 | orchestrator | 2025-05-13 19:59:52 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:52.598858 | orchestrator | 2025-05-13 19:59:52 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:52.600335 | orchestrator | 2025-05-13 19:59:52 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:52.601969 | orchestrator | 2025-05-13 
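The interleaved INFO lines come from the deployment wrapper polling its background tasks by UUID until each leaves the STARTED state, sleeping one second between checks (the observed cycle time is closer to three seconds, presumably because the checks themselves take time). A generic sketch of such a loop, assuming a Celery result backend; the broker/backend URLs are placeholders and OSISM's real client-side logic may differ.

import time
from celery import Celery
from celery.result import AsyncResult

# Broker/backend URLs are placeholders -- assumptions for this sketch.
app = Celery(broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

def wait_for_tasks(task_ids):
    """Poll each task by UUID until every one has reached SUCCESS."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = AsyncResult(task_id, app=app).state
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print("Wait 1 second(s) until the next check")
            time.sleep(1)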
19:59:52 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:52.603642 | orchestrator | 2025-05-13 19:59:52 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:52.604231 | orchestrator | 2025-05-13 19:59:52 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:52.604252 | orchestrator | 2025-05-13 19:59:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:55.660808 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:55.662421 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:55.664709 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state STARTED 2025-05-13 19:59:55.664762 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:55.665903 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:55.667785 | orchestrator | 2025-05-13 19:59:55 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:55.667848 | orchestrator | 2025-05-13 19:59:55 | INFO  | Wait 1 second(s) until the next check 2025-05-13 19:59:58.718703 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 19:59:58.719048 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 19:59:58.720370 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task a715fec6-2905-4ca4-831f-9aff84ab1886 is in state SUCCESS 2025-05-13 19:59:58.721449 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 19:59:58.723038 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 19:59:58.723613 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 19:59:58.725216 | orchestrator | 2025-05-13 19:59:58 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 19:59:58.725238 | orchestrator | 2025-05-13 19:59:58 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:01.772609 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:01.772710 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 20:00:01.773496 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:01.774083 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:01.775026 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:01.775888 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:01.777334 | orchestrator | 2025-05-13 20:00:01 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:01.777473 | orchestrator | 2025-05-13 20:00:01 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:04.828437 | 
orchestrator | 2025-05-13 20:00:04 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:04.828655 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state STARTED 2025-05-13 20:00:04.829282 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:04.830052 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:04.831207 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:04.832100 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:04.833754 | orchestrator | 2025-05-13 20:00:04 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:04.833789 | orchestrator | 2025-05-13 20:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:07.876626 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:07.878621 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task d642b4cc-fd13-40ca-bf48-e52ce46b7a31 is in state SUCCESS 2025-05-13 20:00:07.879934 | orchestrator | 2025-05-13 20:00:07.879980 | orchestrator | 2025-05-13 20:00:07.880000 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:00:07.880011 | orchestrator | 2025-05-13 20:00:07.880022 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:00:07.880033 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:00.375) 0:00:00.375 *********** 2025-05-13 20:00:07.880043 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:00:07.880054 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:00:07.880064 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:00:07.880074 | orchestrator | 2025-05-13 20:00:07.880085 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:00:07.880096 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.638) 0:00:01.014 *********** 2025-05-13 20:00:07.880108 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-13 20:00:07.880139 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-13 20:00:07.880151 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-13 20:00:07.880162 | orchestrator | 2025-05-13 20:00:07.880173 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-13 20:00:07.880180 | orchestrator | 2025-05-13 20:00:07.880187 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-13 20:00:07.880194 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.918) 0:00:01.932 *********** 2025-05-13 20:00:07.880220 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:00:07.880229 | orchestrator | 2025-05-13 20:00:07.880235 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-13 20:00:07.880242 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.776) 0:00:02.709 *********** 2025-05-13 20:00:07.880249 | orchestrator | changed: [testbed-node-1] => 
(item=memcached) 2025-05-13 20:00:07.880256 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-13 20:00:07.880263 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-13 20:00:07.880270 | orchestrator | 2025-05-13 20:00:07.880277 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-13 20:00:07.880283 | orchestrator | Tuesday 13 May 2025 19:59:44 +0000 (0:00:01.131) 0:00:03.840 *********** 2025-05-13 20:00:07.880290 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-13 20:00:07.880297 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-13 20:00:07.880304 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-13 20:00:07.880310 | orchestrator | 2025-05-13 20:00:07.880317 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-13 20:00:07.880324 | orchestrator | Tuesday 13 May 2025 19:59:46 +0000 (0:00:02.022) 0:00:05.863 *********** 2025-05-13 20:00:07.880330 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:07.880337 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:07.880343 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:07.880350 | orchestrator | 2025-05-13 20:00:07.880356 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-13 20:00:07.880363 | orchestrator | Tuesday 13 May 2025 19:59:48 +0000 (0:00:02.452) 0:00:08.315 *********** 2025-05-13 20:00:07.880369 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:07.880376 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:07.880383 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:07.880393 | orchestrator | 2025-05-13 20:00:07.880403 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:00:07.880426 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.880439 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.880450 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.880460 | orchestrator | 2025-05-13 20:00:07.880467 | orchestrator | 2025-05-13 20:00:07.880476 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:00:07.880523 | orchestrator | Tuesday 13 May 2025 19:59:56 +0000 (0:00:07.487) 0:00:15.803 *********** 2025-05-13 20:00:07.880534 | orchestrator | =============================================================================== 2025-05-13 20:00:07.880544 | orchestrator | memcached : Restart memcached container --------------------------------- 7.49s 2025-05-13 20:00:07.880553 | orchestrator | memcached : Check memcached container ----------------------------------- 2.45s 2025-05-13 20:00:07.880563 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.02s 2025-05-13 20:00:07.880572 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.13s 2025-05-13 20:00:07.880592 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2025-05-13 20:00:07.880601 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.78s 2025-05-13 20:00:07.880611 | orchestrator | Group 
hosts based on Kolla action --------------------------------------- 0.64s 2025-05-13 20:00:07.880620 | orchestrator | 2025-05-13 20:00:07.880630 | orchestrator | 2025-05-13 20:00:07.880639 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:00:07.880649 | orchestrator | 2025-05-13 20:00:07.880658 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:00:07.880668 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:00.693) 0:00:00.693 *********** 2025-05-13 20:00:07.880677 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:00:07.880687 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:00:07.880696 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:00:07.880706 | orchestrator | 2025-05-13 20:00:07.880715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:00:07.880745 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.659) 0:00:01.353 *********** 2025-05-13 20:00:07.880756 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-13 20:00:07.880765 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-13 20:00:07.880775 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-13 20:00:07.880784 | orchestrator | 2025-05-13 20:00:07.880793 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-13 20:00:07.880803 | orchestrator | 2025-05-13 20:00:07.880812 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-13 20:00:07.880821 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.843) 0:00:02.197 *********** 2025-05-13 20:00:07.880831 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:00:07.880840 | orchestrator | 2025-05-13 20:00:07.880850 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-13 20:00:07.880859 | orchestrator | Tuesday 13 May 2025 19:59:43 +0000 (0:00:01.049) 0:00:03.247 *********** 2025-05-13 20:00:07.880872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.880958 | orchestrator | 2025-05-13 20:00:07.880969 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-13 20:00:07.880979 | orchestrator | Tuesday 13 May 2025 19:59:45 +0000 (0:00:01.740) 0:00:04.987 *********** 2025-05-13 20:00:07.880989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881068 | orchestrator | 2025-05-13 20:00:07.881078 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-13 20:00:07.881088 | orchestrator | Tuesday 13 May 2025 19:59:48 +0000 (0:00:02.937) 0:00:07.924 *********** 2025-05-13 
20:00:07.881098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881165 | orchestrator | 2025-05-13 20:00:07.881183 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-13 20:00:07.881194 | orchestrator | Tuesday 13 May 2025 19:59:51 +0000 (0:00:03.158) 0:00:11.082 *********** 2025-05-13 20:00:07.881204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-13 20:00:07.881270 | orchestrator | 2025-05-13 20:00:07.881280 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-13 20:00:07.881289 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:01.947) 0:00:13.030 *********** 2025-05-13 20:00:07.881299 | orchestrator | 2025-05-13 20:00:07.881308 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-13 20:00:07.881327 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:00.060) 0:00:13.091 *********** 2025-05-13 20:00:07.881337 | orchestrator | 2025-05-13 20:00:07.881347 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-13 20:00:07.881357 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:00.059) 0:00:13.150 *********** 2025-05-13 20:00:07.881366 | orchestrator | 2025-05-13 20:00:07.881376 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-13 20:00:07.881385 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:00.063) 0:00:13.213 *********** 2025-05-13 20:00:07.881395 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:07.881404 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:07.881414 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:07.881423 | orchestrator | 2025-05-13 20:00:07.881433 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-13 20:00:07.881442 | orchestrator | Tuesday 13 May 2025 19:59:57 +0000 (0:00:04.019) 0:00:17.233 *********** 2025-05-13 20:00:07.881452 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:07.881461 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:07.881471 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:07.881507 | orchestrator | 2025-05-13 20:00:07.881519 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:00:07.881529 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.881546 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.881556 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:00:07.881565 | orchestrator | 2025-05-13 20:00:07.881575 | 
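The redis and redis-sentinel definitions above carry a healthcheck block (interval, retries, start_period, test, timeout) with durations stored as second-valued strings, while Docker itself expects durations in nanoseconds. A sketch of that translation using the Docker SDK's Healthcheck type; the healthcheck values are copied from the redis definition above, but the conversion itself is an assumption about kolla-ansible's internals.

from docker.types import Healthcheck

# Healthcheck block as it appears in the redis definition above;
# kolla stores the durations as second-valued strings.
kolla_hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}

NS_PER_S = 1_000_000_000  # Docker expects durations in nanoseconds

healthcheck = Healthcheck(
    test=kolla_hc["test"],
    interval=int(kolla_hc["interval"]) * NS_PER_S,
    timeout=int(kolla_hc["timeout"]) * NS_PER_S,
    retries=int(kolla_hc["retries"]),
    start_period=int(kolla_hc["start_period"]) * NS_PER_S,
)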
orchestrator | 2025-05-13 20:00:07.881589 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:00:07.881600 | orchestrator | Tuesday 13 May 2025 20:00:05 +0000 (0:00:08.437) 0:00:25.670 *********** 2025-05-13 20:00:07.881610 | orchestrator | =============================================================================== 2025-05-13 20:00:07.881619 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.44s 2025-05-13 20:00:07.881629 | orchestrator | redis : Restart redis container ----------------------------------------- 4.02s 2025-05-13 20:00:07.881639 | orchestrator | redis : Copying over redis config files --------------------------------- 3.16s 2025-05-13 20:00:07.881648 | orchestrator | redis : Copying over default config.json files -------------------------- 2.94s 2025-05-13 20:00:07.881658 | orchestrator | redis : Check redis containers ------------------------------------------ 1.95s 2025-05-13 20:00:07.881667 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.74s 2025-05-13 20:00:07.881677 | orchestrator | redis : include_tasks --------------------------------------------------- 1.05s 2025-05-13 20:00:07.881686 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-13 20:00:07.881696 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2025-05-13 20:00:07.881705 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.18s 2025-05-13 20:00:07.881820 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:07.881834 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:07.881844 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:07.882126 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:07.882756 | orchestrator | 2025-05-13 20:00:07 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:07.882867 | orchestrator | 2025-05-13 20:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:10.925577 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:10.928703 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:10.929340 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:10.929934 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:10.930664 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:10.932821 | orchestrator | 2025-05-13 20:00:10 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:10.936153 | orchestrator | 2025-05-13 20:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:13.982183 | orchestrator | 2025-05-13 20:00:13 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:13.983263 | orchestrator | 2025-05-13 20:00:13 | INFO  | 
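Each play ends with a TASKS RECAP listing per-task wall-clock durations, apparently produced by Ansible's profile_tasks callback. A small parser sketch for pulling (task, seconds) pairs out of those lines; the two sample lines are copied verbatim from the redis recap above, and the line format is inferred from this log rather than from the callback's documentation.

import re

# Two duration lines copied verbatim from the TASKS RECAP above.
RECAP_LINES = [
    "redis : Restart redis-sentinel container -------------------------------- 8.44s",
    "redis : Restart redis container ----------------------------------------- 4.02s",
]

# Task name, a run of dashes as filler, then the duration in seconds.
PATTERN = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Yield (task name, seconds) pairs from profile_tasks recap lines."""
    for line in lines:
        m = PATTERN.match(line)
        if m:
            yield m.group("task").rstrip(), float(m.group("secs"))

for task, secs in parse_recap(RECAP_LINES):
    print(f"{secs:7.2f}s  {task}")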
Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:13.984906 | orchestrator | 2025-05-13 20:00:13 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:13.986419 | orchestrator | 2025-05-13 20:00:13 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:13.988405 | orchestrator | 2025-05-13 20:00:13 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:13.989318 | orchestrator | 2025-05-13 20:00:13 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:13.989348 | orchestrator | 2025-05-13 20:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:17.060872 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:17.060986 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state STARTED 2025-05-13 20:00:17.061001 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:17.061013 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:17.061449 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:17.063065 | orchestrator | 2025-05-13 20:00:17 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:17.063093 | orchestrator | 2025-05-13 20:00:17 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:20.102819 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:20.102971 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task a11fa720-efd5-49e4-b90a-74159a989dc5 is in state SUCCESS 2025-05-13 20:00:20.103016 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:20.103851 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:20.104338 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:20.104851 | orchestrator | 2025-05-13 20:00:20 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:20.105018 | orchestrator | 2025-05-13 20:00:20 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:23.131398 | orchestrator | 2025-05-13 20:00:23 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:23.132690 | orchestrator | 2025-05-13 20:00:23 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:23.133575 | orchestrator | 2025-05-13 20:00:23 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:23.134444 | orchestrator | 2025-05-13 20:00:23 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:23.135844 | orchestrator | 2025-05-13 20:00:23 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:23.135872 | orchestrator | 2025-05-13 20:00:23 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:26.168945 | orchestrator | 2025-05-13 20:00:26 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:26.169192 | orchestrator | 
2025-05-13 20:00:26 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:26.169943 | orchestrator | 2025-05-13 20:00:26 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:26.170839 | orchestrator | 2025-05-13 20:00:26 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:26.171743 | orchestrator | 2025-05-13 20:00:26 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:26.171780 | orchestrator | 2025-05-13 20:00:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:29.212065 | orchestrator | 2025-05-13 20:00:29 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:29.213166 | orchestrator | 2025-05-13 20:00:29 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:29.214241 | orchestrator | 2025-05-13 20:00:29 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:29.215789 | orchestrator | 2025-05-13 20:00:29 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:29.216897 | orchestrator | 2025-05-13 20:00:29 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:29.216928 | orchestrator | 2025-05-13 20:00:29 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:32.255443 | orchestrator | 2025-05-13 20:00:32 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:32.255900 | orchestrator | 2025-05-13 20:00:32 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:32.257012 | orchestrator | 2025-05-13 20:00:32 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:32.257715 | orchestrator | 2025-05-13 20:00:32 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:32.258543 | orchestrator | 2025-05-13 20:00:32 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:32.258642 | orchestrator | 2025-05-13 20:00:32 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:35.294713 | orchestrator | 2025-05-13 20:00:35 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:35.294888 | orchestrator | 2025-05-13 20:00:35 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:35.294914 | orchestrator | 2025-05-13 20:00:35 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:35.295013 | orchestrator | 2025-05-13 20:00:35 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:35.295791 | orchestrator | 2025-05-13 20:00:35 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:35.295914 | orchestrator | 2025-05-13 20:00:35 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:38.341934 | orchestrator | 2025-05-13 20:00:38 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:38.342699 | orchestrator | 2025-05-13 20:00:38 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:38.343631 | orchestrator | 2025-05-13 20:00:38 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:38.344499 | orchestrator | 2025-05-13 20:00:38 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 
20:00:38.352103 | orchestrator | 2025-05-13 20:00:38 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:38.352155 | orchestrator | 2025-05-13 20:00:38 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:41.406765 | orchestrator | 2025-05-13 20:00:41 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:41.406891 | orchestrator | 2025-05-13 20:00:41 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:41.407261 | orchestrator | 2025-05-13 20:00:41 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:41.408213 | orchestrator | 2025-05-13 20:00:41 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:41.409045 | orchestrator | 2025-05-13 20:00:41 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:41.409096 | orchestrator | 2025-05-13 20:00:41 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:44.462312 | orchestrator | 2025-05-13 20:00:44 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:44.469478 | orchestrator | 2025-05-13 20:00:44 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:44.472485 | orchestrator | 2025-05-13 20:00:44 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:44.475867 | orchestrator | 2025-05-13 20:00:44 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:44.476791 | orchestrator | 2025-05-13 20:00:44 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:44.478135 | orchestrator | 2025-05-13 20:00:44 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:47.525816 | orchestrator | 2025-05-13 20:00:47 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:47.527020 | orchestrator | 2025-05-13 20:00:47 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:47.529362 | orchestrator | 2025-05-13 20:00:47 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:47.532617 | orchestrator | 2025-05-13 20:00:47 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:47.533156 | orchestrator | 2025-05-13 20:00:47 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:47.536128 | orchestrator | 2025-05-13 20:00:47 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:50.571769 | orchestrator | 2025-05-13 20:00:50 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state STARTED 2025-05-13 20:00:50.573034 | orchestrator | 2025-05-13 20:00:50 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:50.577268 | orchestrator | 2025-05-13 20:00:50 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:50.579644 | orchestrator | 2025-05-13 20:00:50 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:50.580668 | orchestrator | 2025-05-13 20:00:50 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:50.580736 | orchestrator | 2025-05-13 20:00:50 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:00:53.635558 | orchestrator | 2025-05-13 20:00:53.635673 | orchestrator | None 2025-05-13 20:00:53.635689 | orchestrator | 2025-05-13 
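The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines are the OSISM client polling its background tasks until each one finishes; the states shown (STARTED, SUCCESS) resemble Celery task states. A minimal sketch of such a wait loop, assuming a hypothetical `get_state(task_id)` lookup; OSISM's actual implementation may differ:

```python
import time

def wait_for_tasks(task_ids, get_state, delay=1):
    """Poll a set of task IDs until none is left in a non-final state.

    get_state is assumed to map a task ID to a state string such as
    'STARTED' or 'SUCCESS' (the states visible in the log resemble
    Celery task states). This mirrors the observed log output only;
    it is not OSISM's actual code.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)
```

Note how the log matches this shape: once task a11fa720 reaches SUCCESS it disappears from the subsequent checks, while the remaining tasks keep being reported as STARTED.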
20:00:53.635701 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:00:53.635712 | orchestrator | 2025-05-13 20:00:53.635723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:00:53.635735 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:00.500) 0:00:00.500 *********** 2025-05-13 20:00:53.635781 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:00:53.635795 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:00:53.635806 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:00:53.635818 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:00:53.635837 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:00:53.635872 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:00:53.635883 | orchestrator | 2025-05-13 20:00:53.635894 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:00:53.635905 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:01.143) 0:00:01.643 *********** 2025-05-13 20:00:53.635916 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635928 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635939 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635950 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635960 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635971 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-13 20:00:53.635982 | orchestrator | 2025-05-13 20:00:53.635993 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-13 20:00:53.636003 | orchestrator | 2025-05-13 20:00:53.636014 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-13 20:00:53.636025 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:01.030) 0:00:02.674 *********** 2025-05-13 20:00:53.636036 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:00:53.636048 | orchestrator | 2025-05-13 20:00:53.636059 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-13 20:00:53.636070 | orchestrator | Tuesday 13 May 2025 19:59:44 +0000 (0:00:01.902) 0:00:04.576 *********** 2025-05-13 20:00:53.636081 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-13 20:00:53.636092 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-13 20:00:53.636103 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-13 20:00:53.636114 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-13 20:00:53.636124 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-13 20:00:53.636135 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-13 20:00:53.636145 | orchestrator | 2025-05-13 20:00:53.636156 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-13 20:00:53.636167 | orchestrator | Tuesday 13 May 2025 19:59:46 +0000 (0:00:01.556) 
0:00:06.132 *********** 2025-05-13 20:00:53.636177 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-13 20:00:53.636188 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-13 20:00:53.636199 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-13 20:00:53.636209 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-13 20:00:53.636220 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-13 20:00:53.636231 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-13 20:00:53.636241 | orchestrator | 2025-05-13 20:00:53.636252 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-13 20:00:53.636262 | orchestrator | Tuesday 13 May 2025 19:59:48 +0000 (0:00:02.104) 0:00:08.237 *********** 2025-05-13 20:00:53.636273 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-13 20:00:53.636284 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:00:53.636295 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-13 20:00:53.636306 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:00:53.636316 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-13 20:00:53.636327 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:00:53.636337 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-13 20:00:53.636361 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-13 20:00:53.636373 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:00:53.636391 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:00:53.636478 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-13 20:00:53.636499 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:00:53.636588 | orchestrator | 2025-05-13 20:00:53.636610 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-13 20:00:53.636629 | orchestrator | Tuesday 13 May 2025 19:59:50 +0000 (0:00:02.215) 0:00:10.452 *********** 2025-05-13 20:00:53.636648 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:00:53.636665 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:00:53.636684 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:00:53.636703 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:00:53.636721 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:00:53.636739 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:00:53.636757 | orchestrator | 2025-05-13 20:00:53.636774 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-13 20:00:53.636793 | orchestrator | Tuesday 13 May 2025 19:59:51 +0000 (0:00:01.005) 0:00:11.457 *********** 2025-05-13 20:00:53.636843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.636872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.636894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.636917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.636953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.636977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637139 | orchestrator | 2025-05-13 20:00:53.637150 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-13 20:00:53.637161 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:02.040) 0:00:13.498 *********** 2025-05-13 20:00:53.637172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 
20:00:53.637213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637362 | orchestrator | 2025-05-13 20:00:53.637373 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-13 20:00:53.637384 | orchestrator | Tuesday 13 May 2025 19:59:57 +0000 (0:00:03.852) 0:00:17.351 *********** 2025-05-13 20:00:53.637395 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:00:53.637406 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:00:53.637416 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:00:53.637427 | 
orchestrator | skipping: [testbed-node-0] 2025-05-13 20:00:53.637438 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:00:53.637448 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:00:53.637459 | orchestrator | 2025-05-13 20:00:53.637512 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-13 20:00:53.637526 | orchestrator | Tuesday 13 May 2025 19:59:59 +0000 (0:00:01.570) 0:00:18.922 *********** 2025-05-13 20:00:53.637570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-05-13 20:00:53.637792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-13 20:00:53.637835 | orchestrator | 2025-05-13 20:00:53.637846 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.637864 | orchestrator | Tuesday 13 May 2025 20:00:01 +0000 (0:00:02.726) 0:00:21.648 *********** 2025-05-13 20:00:53.637875 | orchestrator | 2025-05-13 20:00:53.637886 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.637897 | orchestrator | Tuesday 13 May 2025 20:00:01 +0000 (0:00:00.153) 0:00:21.801 *********** 2025-05-13 20:00:53.637908 | orchestrator | 2025-05-13 20:00:53.637918 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.637929 | orchestrator | Tuesday 13 May 2025 20:00:02 +0000 (0:00:00.143) 0:00:21.945 *********** 2025-05-13 20:00:53.637940 | orchestrator | 2025-05-13 20:00:53.637950 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.637961 | orchestrator | Tuesday 13 May 2025 20:00:02 +0000 (0:00:00.138) 0:00:22.083 *********** 2025-05-13 20:00:53.637972 | orchestrator | 2025-05-13 20:00:53.637982 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.637993 | orchestrator | Tuesday 13 
May 2025 20:00:02 +0000 (0:00:00.152) 0:00:22.236 *********** 2025-05-13 20:00:53.638004 | orchestrator | 2025-05-13 20:00:53.638072 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-13 20:00:53.638098 | orchestrator | Tuesday 13 May 2025 20:00:02 +0000 (0:00:00.301) 0:00:22.537 *********** 2025-05-13 20:00:53.638112 | orchestrator | 2025-05-13 20:00:53.638123 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-13 20:00:53.638134 | orchestrator | Tuesday 13 May 2025 20:00:03 +0000 (0:00:00.789) 0:00:23.326 *********** 2025-05-13 20:00:53.638145 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:53.638156 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:00:53.638167 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:53.638178 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:00:53.638189 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:00:53.638200 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:53.638210 | orchestrator | 2025-05-13 20:00:53.638221 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-13 20:00:53.638232 | orchestrator | Tuesday 13 May 2025 20:00:15 +0000 (0:00:11.598) 0:00:34.925 *********** 2025-05-13 20:00:53.638243 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:00:53.638255 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:00:53.638265 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:00:53.638276 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:00:53.638287 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:00:53.638298 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:00:53.638308 | orchestrator | 2025-05-13 20:00:53.638319 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-13 20:00:53.638330 | orchestrator | Tuesday 13 May 2025 20:00:18 +0000 (0:00:02.932) 0:00:37.857 *********** 2025-05-13 20:00:53.638347 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:00:53.638358 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:00:53.638369 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:53.638379 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:00:53.638390 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:53.638401 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:53.638412 | orchestrator | 2025-05-13 20:00:53.638422 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-13 20:00:53.638433 | orchestrator | Tuesday 13 May 2025 20:00:28 +0000 (0:00:10.029) 0:00:47.887 *********** 2025-05-13 20:00:53.638444 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-13 20:00:53.638455 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-13 20:00:53.638466 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-13 20:00:53.638477 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-13 20:00:53.638499 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-13 20:00:53.638520 | orchestrator | changed: [testbed-node-2] 
=> (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-13 20:00:53.638531 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-13 20:00:53.638591 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-13 20:00:53.638602 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-13 20:00:53.638613 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-13 20:00:53.638624 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-13 20:00:53.638634 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-13 20:00:53.638645 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638656 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638667 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638677 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638688 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638699 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-13 20:00:53.638709 | orchestrator | 2025-05-13 20:00:53.638720 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-13 20:00:53.638731 | orchestrator | Tuesday 13 May 2025 20:00:35 +0000 (0:00:07.369) 0:00:55.256 *********** 2025-05-13 20:00:53.638742 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-13 20:00:53.638753 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-13 20:00:53.638764 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:00:53.638774 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-13 20:00:53.638785 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:00:53.638796 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:00:53.638806 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-13 20:00:53.638817 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-13 20:00:53.638828 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-13 20:00:53.638838 | orchestrator | 2025-05-13 20:00:53.638849 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-13 20:00:53.638860 | orchestrator | Tuesday 13 May 2025 20:00:38 +0000 (0:00:02.807) 0:00:58.064 *********** 2025-05-13 20:00:53.638870 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-13 20:00:53.638881 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:00:53.638892 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-13 20:00:53.638903 | orchestrator | skipping: 
[testbed-node-4] 2025-05-13 20:00:53.638914 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-13 20:00:53.638924 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:00:53.638935 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-13 20:00:53.638946 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-13 20:00:53.638957 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-13 20:00:53.638977 | orchestrator | 2025-05-13 20:00:53.638988 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-13 20:00:53.638998 | orchestrator | Tuesday 13 May 2025 20:00:42 +0000 (0:00:04.405) 0:01:02.469 *********** 2025-05-13 20:00:53.639009 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:00:53.639020 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:00:53.639031 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:00:53.639047 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:00:53.639058 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:00:53.639069 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:00:53.639080 | orchestrator | 2025-05-13 20:00:53.639090 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:00:53.639102 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:00:53.639113 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:00:53.639124 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:00:53.639135 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:00:53.639146 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:00:53.639164 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:00:53.639175 | orchestrator | 2025-05-13 20:00:53.639186 | orchestrator | 2025-05-13 20:00:53.639197 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:00:53.639208 | orchestrator | Tuesday 13 May 2025 20:00:50 +0000 (0:00:07.920) 0:01:10.390 *********** 2025-05-13 20:00:53.639219 | orchestrator | =============================================================================== 2025-05-13 20:00:53.639230 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.95s 2025-05-13 20:00:53.639241 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.60s 2025-05-13 20:00:53.639252 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.37s 2025-05-13 20:00:53.639262 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.41s 2025-05-13 20:00:53.639273 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.85s 2025-05-13 20:00:53.639284 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.93s 2025-05-13 20:00:53.639295 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.81s 2025-05-13 20:00:53.639306 | orchestrator | openvswitch : Check 
openvswitch containers ------------------------------ 2.73s 2025-05-13 20:00:53.639317 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.22s 2025-05-13 20:00:53.639328 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.10s 2025-05-13 20:00:53.639338 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.04s 2025-05-13 20:00:53.639349 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.90s 2025-05-13 20:00:53.639359 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.68s 2025-05-13 20:00:53.639370 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.57s 2025-05-13 20:00:53.639381 | orchestrator | module-load : Load modules ---------------------------------------------- 1.56s 2025-05-13 20:00:53.639392 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.14s 2025-05-13 20:00:53.639409 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.03s 2025-05-13 20:00:53.639420 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.00s
2025-05-13 20:00:53.639431 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task e96bd524-31b3-4a6a-bbda-2e10e40e18e5 is in state SUCCESS 2025-05-13 20:00:53.639442 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:00:53.639453 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:00:53.639470 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state STARTED 2025-05-13 20:00:53.639488 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:00:53.639507 | orchestrator | 2025-05-13 20:00:53 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:53.639527 | orchestrator | 2025-05-13 20:00:53 | INFO  | Wait 1 second(s) until the next check
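The `Set system-id, hostname and hw-offload` items and the `br-ex`/`vxlan0` tasks in the play above reduce to a handful of ovs-vsctl operations. A rough shell-level equivalent, sketched under the assumption that plain `ovs-vsctl` semantics apply; kolla-ansible actually drives this through its own wrapper and Ansible modules, and the hostname below is the per-host value seen in the loop items:

```python
import subprocess

def ovs_vsctl(*args):
    # Sketch: run ovs-vsctl on the host; kolla executes the equivalent
    # inside the openvswitch containers.
    subprocess.run(["ovs-vsctl", *args], check=True)

hostname = "testbed-node-0"  # per-host value, as in the loop items above

# "Set system-id, hostname and hw-offload": set external_ids, and remove
# the hw-offload key (the log shows it with state: absent).
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={hostname}")
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={hostname}")
ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")

# Bridge/port setup, which the log shows running only on the network nodes
# (testbed-node-0/1/2); --may-exist keeps the commands idempotent on reruns.
ovs_vsctl("--may-exist", "add-br", "br-ex")
ovs_vsctl("--may-exist", "add-port", "br-ex", "vxlan0")
```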
state STARTED 2025-05-13 20:00:59.731999 | orchestrator | 2025-05-13 20:00:59 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:00:59.732020 | orchestrator | 2025-05-13 20:00:59 | INFO  | Wait 1 second(s) until the next check [… the same five tasks (a6244877…, 50c61596…, 41b2f488…, 2e907683…, 00a272a3…) were re-polled every ~3 seconds and all remained in state STARTED from 20:01:02 through 20:01:27 …] 2025-05-13 20:01:27.127083 | orchestrator | 2025-05-13 20:01:27 | INFO  | Task
2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:01:27.128072 | orchestrator | 2025-05-13 20:01:27 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED 2025-05-13 20:01:27.128098 | orchestrator | 2025-05-13 20:01:27 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:01:30.168844 | orchestrator | 2025-05-13 20:01:30 | INFO  | Task fc91bea1-cafd-40eb-9eab-5d4426f51398 is in state STARTED 2025-05-13 20:01:30.168981 | orchestrator | 2025-05-13 20:01:30 | INFO  | Task e8650d26-c9d8-4d11-8b89-e83832ef89aa is in state STARTED 2025-05-13 20:01:30.169007 | orchestrator | 2025-05-13 20:01:30 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:01:30.169115 | orchestrator | 2025-05-13 20:01:30 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:01:30.170400 | orchestrator | 2025-05-13 20:01:30.170431 | orchestrator | 2025-05-13 20:01:30 | INFO  | Task 41b2f488-a587-45b9-958e-46fbd1638ca7 is in state SUCCESS 2025-05-13 20:01:30.172141 | orchestrator | 2025-05-13 20:01:30.172231 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-05-13 20:01:30.172246 | orchestrator | 2025-05-13 20:01:30.172259 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-05-13 20:01:30.172271 | orchestrator | Tuesday 13 May 2025 19:56:48 +0000 (0:00:00.183) 0:00:00.183 *********** 2025-05-13 20:01:30.172283 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:01:30.172294 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:01:30.172324 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:01:30.172335 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.172361 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.172394 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.172406 | orchestrator | 2025-05-13 20:01:30.172417 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-05-13 20:01:30.172429 | orchestrator | Tuesday 13 May 2025 19:56:49 +0000 (0:00:00.728) 0:00:00.912 *********** 2025-05-13 20:01:30.172440 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.172452 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.172463 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.172473 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.172484 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.172495 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.172505 | orchestrator | 2025-05-13 20:01:30.172516 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-05-13 20:01:30.172527 | orchestrator | Tuesday 13 May 2025 19:56:50 +0000 (0:00:00.735) 0:00:01.647 *********** 2025-05-13 20:01:30.172538 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.172548 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.172597 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.172616 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.172635 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.172654 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.172669 | orchestrator | 2025-05-13 20:01:30.172683 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-05-13 20:01:30.172696 | orchestrator | Tuesday 13 May 2025 19:56:50 +0000 
(0:00:00.723) 0:00:02.371 *********** 2025-05-13 20:01:30.172708 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.172720 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.172732 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.172745 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.172757 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.172768 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.172778 | orchestrator | 2025-05-13 20:01:30.172789 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-05-13 20:01:30.172800 | orchestrator | Tuesday 13 May 2025 19:56:52 +0000 (0:00:01.897) 0:00:04.269 *********** 2025-05-13 20:01:30.172811 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.172822 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.172832 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.172843 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.172854 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.172865 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.172875 | orchestrator | 2025-05-13 20:01:30.172886 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-05-13 20:01:30.172897 | orchestrator | Tuesday 13 May 2025 19:56:54 +0000 (0:00:01.288) 0:00:05.557 *********** 2025-05-13 20:01:30.172908 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.172918 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.172929 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.172939 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.172950 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.172960 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.172971 | orchestrator | 2025-05-13 20:01:30.172982 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-05-13 20:01:30.172992 | orchestrator | Tuesday 13 May 2025 19:56:55 +0000 (0:00:01.063) 0:00:06.621 *********** 2025-05-13 20:01:30.173003 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173014 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173024 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173035 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173045 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173056 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173067 | orchestrator | 2025-05-13 20:01:30.173078 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-05-13 20:01:30.173098 | orchestrator | Tuesday 13 May 2025 19:56:56 +0000 (0:00:00.910) 0:00:07.532 *********** 2025-05-13 20:01:30.173109 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173120 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173130 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173141 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173152 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173163 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173173 | orchestrator | 2025-05-13 20:01:30.173184 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-05-13 20:01:30.173195 | orchestrator | Tuesday 13 May 2025 19:56:56 +0000 
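For context, the forwarding toggles above are plain sysctl writes applied on every node. A minimal sketch of that k3s_prereq pattern, assuming the ansible.posix collection and a hypothetical k3s_cluster group (the literal role code is not shown in this log):

    - hosts: k3s_cluster            # assumed group name
      become: true
      tasks:
        - name: Enable IPv4 forwarding
          ansible.posix.sysctl:
            name: net.ipv4.ip_forward
            value: "1"
            state: present
            reload: true

        - name: Enable IPv6 forwarding
          ansible.posix.sysctl:
            name: net.ipv6.conf.all.forwarding
            value: "1"
            state: present
            reload: true

        - name: Enable IPv6 router advertisements
          ansible.posix.sysctl:
            name: net.ipv6.conf.all.accept_ra
            value: "2"              # keep accepting RAs even with forwarding on
            state: present
            reload: true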
(0:00:00.671) 0:00:08.203 *********** 2025-05-13 20:01:30.173206 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173217 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173228 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173239 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173249 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173260 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173271 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173282 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173292 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173303 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173332 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173344 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173355 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173367 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173377 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173388 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-13 20:01:30.173404 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-13 20:01:30.173415 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173426 | orchestrator | 2025-05-13 20:01:30.173437 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-05-13 20:01:30.173447 | orchestrator | Tuesday 13 May 2025 19:56:57 +0000 (0:00:00.909) 0:00:09.113 *********** 2025-05-13 20:01:30.173458 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173469 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173479 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173490 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173501 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173511 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173522 | orchestrator | 2025-05-13 20:01:30.173533 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-05-13 20:01:30.173544 | orchestrator | Tuesday 13 May 2025 19:56:59 +0000 (0:00:01.326) 0:00:10.439 *********** 2025-05-13 20:01:30.173593 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:01:30.173605 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:01:30.173616 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:01:30.173627 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.173637 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.173648 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.173658 | orchestrator | 2025-05-13 20:01:30.173669 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-05-13 20:01:30.173679 | orchestrator | Tuesday 13 May 2025 19:56:59 +0000 (0:00:00.667) 
0:00:11.107 *********** 2025-05-13 20:01:30.173698 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.173708 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.173719 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.173730 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.173740 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.173751 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.173761 | orchestrator | 2025-05-13 20:01:30.173772 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-05-13 20:01:30.173782 | orchestrator | Tuesday 13 May 2025 19:57:05 +0000 (0:00:05.682) 0:00:16.790 *********** 2025-05-13 20:01:30.173793 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173803 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173814 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173825 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173835 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173845 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173856 | orchestrator | 2025-05-13 20:01:30.173867 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-05-13 20:01:30.173877 | orchestrator | Tuesday 13 May 2025 19:57:06 +0000 (0:00:01.121) 0:00:17.911 *********** 2025-05-13 20:01:30.173888 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173898 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.173909 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.173919 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.173930 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.173940 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.173950 | orchestrator | 2025-05-13 20:01:30.173962 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-05-13 20:01:30.173973 | orchestrator | Tuesday 13 May 2025 19:57:07 +0000 (0:00:01.492) 0:00:19.404 *********** 2025-05-13 20:01:30.173984 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.173995 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.174006 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.174066 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.174080 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.174092 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.174102 | orchestrator | 2025-05-13 20:01:30.174113 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-05-13 20:01:30.174124 | orchestrator | Tuesday 13 May 2025 19:57:08 +0000 (0:00:00.500) 0:00:19.905 *********** 2025-05-13 20:01:30.174136 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-05-13 20:01:30.174146 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-05-13 20:01:30.174158 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.174169 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-05-13 20:01:30.174180 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-05-13 20:01:30.174190 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.174201 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-05-13 20:01:30.174212 | 
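The x64 download above (5.7 s across six nodes) follows the usual k3s-ansible convention: fetch the release binary and let get_url verify it against the published checksum file. A sketch, assuming the upstream release URLs and a k3s_version variable:

    - name: Download k3s binary x64
      ansible.builtin.get_url:
        url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"
        checksum: "sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt"
        dest: /usr/local/bin/k3s
        owner: root
        group: root
        mode: "0755"
      when: ansible_facts.architecture == "x86_64"   # the arm64/armhf variants were skipped above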
orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-05-13 20:01:30.174223 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.174233 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-05-13 20:01:30.174244 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-05-13 20:01:30.174254 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.174265 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-05-13 20:01:30.174276 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-05-13 20:01:30.174286 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.174297 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-05-13 20:01:30.174308 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-05-13 20:01:30.174318 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.174336 | orchestrator | 2025-05-13 20:01:30.174347 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-05-13 20:01:30.174367 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:00.726) 0:00:20.631 *********** 2025-05-13 20:01:30.174378 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.174389 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.174399 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.174410 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.174420 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.174431 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.174442 | orchestrator | 2025-05-13 20:01:30.174453 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-05-13 20:01:30.174464 | orchestrator | 2025-05-13 20:01:30.174475 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-05-13 20:01:30.174486 | orchestrator | Tuesday 13 May 2025 19:57:10 +0000 (0:00:01.097) 0:00:21.729 *********** 2025-05-13 20:01:30.174497 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.174508 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.174518 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.174529 | orchestrator | 2025-05-13 20:01:30.174540 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-05-13 20:01:30.174603 | orchestrator | Tuesday 13 May 2025 19:57:11 +0000 (0:00:01.246) 0:00:22.975 *********** 2025-05-13 20:01:30.174618 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.174629 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.174640 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.174650 | orchestrator | 2025-05-13 20:01:30.174661 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-05-13 20:01:30.174672 | orchestrator | Tuesday 13 May 2025 19:57:12 +0000 (0:00:01.155) 0:00:24.130 *********** 2025-05-13 20:01:30.174683 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.174694 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.174704 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.174715 | orchestrator | 2025-05-13 20:01:30.174726 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-05-13 20:01:30.174737 | orchestrator | Tuesday 13 May 2025 19:57:13 +0000 (0:00:01.141) 0:00:25.272 *********** 2025-05-13 
20:01:30.174747 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.174758 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.174768 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.174779 | orchestrator | 2025-05-13 20:01:30.174790 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-05-13 20:01:30.174800 | orchestrator | Tuesday 13 May 2025 19:57:14 +0000 (0:00:00.715) 0:00:25.987 *********** 2025-05-13 20:01:30.174811 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.174822 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.174833 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.174843 | orchestrator | 2025-05-13 20:01:30.174854 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-05-13 20:01:30.174865 | orchestrator | Tuesday 13 May 2025 19:57:14 +0000 (0:00:00.408) 0:00:26.396 *********** 2025-05-13 20:01:30.174875 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:01:30.174886 | orchestrator | 2025-05-13 20:01:30.174906 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-05-13 20:01:30.174917 | orchestrator | Tuesday 13 May 2025 19:57:15 +0000 (0:00:00.754) 0:00:27.150 *********** 2025-05-13 20:01:30.174928 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.174939 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.174949 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.174960 | orchestrator | 2025-05-13 20:01:30.174971 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-05-13 20:01:30.174982 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:02.649) 0:00:29.799 *********** 2025-05-13 20:01:30.175000 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175011 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175022 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175032 | orchestrator | 2025-05-13 20:01:30.175043 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-05-13 20:01:30.175054 | orchestrator | Tuesday 13 May 2025 19:57:19 +0000 (0:00:00.680) 0:00:30.480 *********** 2025-05-13 20:01:30.175064 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175075 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175085 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175097 | orchestrator | 2025-05-13 20:01:30.175107 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-05-13 20:01:30.175118 | orchestrator | Tuesday 13 May 2025 19:57:20 +0000 (0:00:00.961) 0:00:31.442 *********** 2025-05-13 20:01:30.175129 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175140 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175150 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175161 | orchestrator | 2025-05-13 20:01:30.175172 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-05-13 20:01:30.175183 | orchestrator | Tuesday 13 May 2025 19:57:21 +0000 (0:00:01.794) 0:00:33.236 *********** 2025-05-13 20:01:30.175193 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.175204 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175215 | 
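The vip tasks above stage kube-vip through k3s's auto-deploying manifests directory, and only on the first master. A sketch under assumptions: the kube-vip documented RBAC URL, a hypothetical k3s_masters group, and a hypothetical vip.yaml.j2 template:

    - name: Create manifests directory on first master
      ansible.builtin.file:
        path: /var/lib/rancher/k3s/server/manifests
        state: directory
        mode: "0755"
      when: inventory_hostname == groups['k3s_masters'][0]

    - name: Download vip rbac manifest to first master
      ansible.builtin.get_url:
        url: https://kube-vip.io/manifests/rbac.yaml           # assumed upstream source
        dest: /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml
        mode: "0644"
      when: inventory_hostname == groups['k3s_masters'][0]

    - name: Copy vip manifest to first master
      ansible.builtin.template:
        src: vip.yaml.j2                                       # hypothetical template name
        dest: /var/lib/rancher/k3s/server/manifests/vip.yaml
        mode: "0644"
      when: inventory_hostname == groups['k3s_masters'][0]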
orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175226 | orchestrator | 2025-05-13 20:01:30.175236 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-05-13 20:01:30.175247 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:00.348) 0:00:33.584 *********** 2025-05-13 20:01:30.175258 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.175268 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175279 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175290 | orchestrator | 2025-05-13 20:01:30.175300 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-05-13 20:01:30.175311 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:00.476) 0:00:34.060 *********** 2025-05-13 20:01:30.175322 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175333 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.175343 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.175354 | orchestrator | 2025-05-13 20:01:30.175365 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-05-13 20:01:30.175376 | orchestrator | Tuesday 13 May 2025 19:57:24 +0000 (0:00:02.279) 0:00:36.340 *********** 2025-05-13 20:01:30.175394 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-13 20:01:30.175407 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-13 20:01:30.175418 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-13 20:01:30.175434 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-13 20:01:30.175446 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-13 20:01:30.175457 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-13 20:01:30.175482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-13 20:01:30.175505 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-13 20:01:30.175524 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-13 20:01:30.175535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-13 20:01:30.175546 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-13 20:01:30.175588 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-05-13 20:01:30.175601 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-13 20:01:30.175612 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-13 20:01:30.175623 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-13 20:01:30.175633 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.175645 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.175656 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.175666 | orchestrator | 2025-05-13 20:01:30.175677 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-05-13 20:01:30.175688 | orchestrator | Tuesday 13 May 2025 19:58:20 +0000 (0:00:56.056) 0:01:32.397 *********** 2025-05-13 20:01:30.175699 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.175709 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.175720 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.175731 | orchestrator | 2025-05-13 20:01:30.175742 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-05-13 20:01:30.175752 | orchestrator | Tuesday 13 May 2025 19:58:21 +0000 (0:00:00.425) 0:01:32.823 *********** 2025-05-13 20:01:30.175763 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.175774 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175785 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.175795 | orchestrator | 2025-05-13 20:01:30.175806 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-05-13 20:01:30.175818 | orchestrator | Tuesday 13 May 2025 19:58:22 +0000 (0:00:01.164) 0:01:33.987 *********** 2025-05-13 20:01:30.175829 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175840 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.175851 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.175861 | orchestrator | 2025-05-13 20:01:30.175872 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-05-13 20:01:30.175883 | orchestrator | Tuesday 13 May 2025 19:58:24 +0000 (0:00:01.727) 0:01:35.715 *********** 2025-05-13 20:01:30.175894 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.175904 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.175915 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.175926 | orchestrator | 2025-05-13 20:01:30.175937 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-05-13 20:01:30.175948 | orchestrator | Tuesday 13 May 2025 19:58:38 +0000 (0:00:14.602) 0:01:50.317 *********** 2025-05-13 20:01:30.175959 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.175969 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.175980 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.175991 | orchestrator | 2025-05-13 20:01:30.176002 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-05-13 20:01:30.176012 | orchestrator | Tuesday 13 May 2025 19:58:40 +0000 (0:00:01.150) 0:01:51.468 *********** 2025-05-13 20:01:30.176023 | orchestrator | ok: [testbed-node-0] 
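Two patterns worth calling out here. The verification task polls until every master shows up in the node list; each FAILED - RETRYING line above is one iteration of an until loop. The node-token sequence that follows is the usual k3s-ansible dance: record the token file's mode, loosen it, slurp the token into a fact for the agents, then restore the mode. A sketch of both, assuming upstream paths and a hypothetical master group name:

    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: >-
          k3s kubectl get nodes
          -l "node-role.kubernetes.io/master=true"
          -o=jsonpath={.items[*].metadata.name}
      register: nodes
      until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
      retries: 20            # matches the "20 retries left" countdown above
      delay: 10
      changed_when: false

    - name: Register node-token file access mode
      ansible.builtin.stat:
        path: /var/lib/rancher/k3s/server/node-token
      register: p

    - name: Change file access node-token
      ansible.builtin.file:
        path: /var/lib/rancher/k3s/server/node-token
        mode: g+rx,o+rx

    - name: Read node-token from master
      ansible.builtin.slurp:
        src: /var/lib/rancher/k3s/server/node-token
      register: node_token

    - name: Store Master node-token
      ansible.builtin.set_fact:
        token: "{{ node_token.content | b64decode | trim }}"

    - name: Restore node-token file access
      ansible.builtin.file:
        path: /var/lib/rancher/k3s/server/node-token
        mode: "{{ p.stat.mode }}"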
2025-05-13 20:01:30.176034 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.176045 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.176055 | orchestrator | 2025-05-13 20:01:30.176075 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-05-13 20:01:30.176086 | orchestrator | Tuesday 13 May 2025 19:58:40 +0000 (0:00:00.778) 0:01:52.247 *********** 2025-05-13 20:01:30.176097 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.176108 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.176119 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.176130 | orchestrator | 2025-05-13 20:01:30.176148 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-05-13 20:01:30.176159 | orchestrator | Tuesday 13 May 2025 19:58:41 +0000 (0:00:00.891) 0:01:53.138 *********** 2025-05-13 20:01:30.176170 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.176181 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.176192 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.176202 | orchestrator | 2025-05-13 20:01:30.176213 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-05-13 20:01:30.176229 | orchestrator | Tuesday 13 May 2025 19:58:43 +0000 (0:00:01.281) 0:01:54.421 *********** 2025-05-13 20:01:30.176241 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.176252 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.176262 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.176273 | orchestrator | 2025-05-13 20:01:30.176284 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-05-13 20:01:30.176295 | orchestrator | Tuesday 13 May 2025 19:58:43 +0000 (0:00:00.327) 0:01:54.748 *********** 2025-05-13 20:01:30.176306 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.176316 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.176327 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.176338 | orchestrator | 2025-05-13 20:01:30.176348 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-05-13 20:01:30.176359 | orchestrator | Tuesday 13 May 2025 19:58:44 +0000 (0:00:00.667) 0:01:55.416 *********** 2025-05-13 20:01:30.176370 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.176381 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.176392 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.176403 | orchestrator | 2025-05-13 20:01:30.176414 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-05-13 20:01:30.176424 | orchestrator | Tuesday 13 May 2025 19:58:44 +0000 (0:00:00.920) 0:01:56.336 *********** 2025-05-13 20:01:30.176435 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.176446 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.176457 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:01:30.176467 | orchestrator | 2025-05-13 20:01:30.176478 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-05-13 20:01:30.176489 | orchestrator | Tuesday 13 May 2025 19:58:46 +0000 (0:00:01.391) 0:01:57.727 *********** 2025-05-13 20:01:30.176500 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:01:30.176510 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:01:30.176521 | orchestrator | changed: 
[testbed-node-2] 2025-05-13 20:01:30.176532 | orchestrator | 2025-05-13 20:01:30.176543 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-05-13 20:01:30.176573 | orchestrator | Tuesday 13 May 2025 19:58:47 +0000 (0:00:01.029) 0:01:58.757 *********** 2025-05-13 20:01:30.176584 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.176595 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.176606 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.176616 | orchestrator | 2025-05-13 20:01:30.176627 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-05-13 20:01:30.176638 | orchestrator | Tuesday 13 May 2025 19:58:47 +0000 (0:00:00.300) 0:01:59.058 *********** 2025-05-13 20:01:30.176649 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.176659 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.176670 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.176680 | orchestrator | 2025-05-13 20:01:30.176691 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-05-13 20:01:30.176709 | orchestrator | Tuesday 13 May 2025 19:58:47 +0000 (0:00:00.351) 0:01:59.409 *********** 2025-05-13 20:01:30.176720 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.176731 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.176742 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.176753 | orchestrator | 2025-05-13 20:01:30.176763 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-05-13 20:01:30.176774 | orchestrator | Tuesday 13 May 2025 19:58:49 +0000 (0:00:01.339) 0:02:00.749 *********** 2025-05-13 20:01:30.176785 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.176796 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.176806 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.176817 | orchestrator | 2025-05-13 20:01:30.176828 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-05-13 20:01:30.176839 | orchestrator | Tuesday 13 May 2025 19:58:50 +0000 (0:00:00.742) 0:02:01.491 *********** 2025-05-13 20:01:30.176850 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-13 20:01:30.176860 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-13 20:01:30.176871 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-13 20:01:30.176882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-13 20:01:30.176894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-13 20:01:30.176905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-13 20:01:30.176916 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-13 20:01:30.176927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-13 20:01:30.176938 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-13 20:01:30.176949 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-05-13 20:01:30.176960 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-13 20:01:30.176971 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-13 20:01:30.176988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-05-13 20:01:30.176999 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-13 20:01:30.177010 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-13 20:01:30.177027 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-13 20:01:30.177038 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-13 20:01:30.177049 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-13 20:01:30.177059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-13 20:01:30.177070 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-13 20:01:30.177081 | orchestrator | 2025-05-13 20:01:30.177092 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-05-13 20:01:30.177103 | orchestrator | 2025-05-13 20:01:30.177114 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-05-13 20:01:30.177125 | orchestrator | Tuesday 13 May 2025 19:58:53 +0000 (0:00:03.338) 0:02:04.830 *********** 2025-05-13 20:01:30.177136 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:01:30.177147 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:01:30.177170 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:01:30.177181 | orchestrator | 2025-05-13 20:01:30.177192 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-05-13 20:01:30.177203 | orchestrator | Tuesday 13 May 2025 19:58:54 +0000 (0:00:00.732) 0:02:05.562 *********** 2025-05-13 20:01:30.177213 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:01:30.177225 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:01:30.177235 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:01:30.177246 | orchestrator | 2025-05-13 20:01:30.177257 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-05-13 20:01:30.177267 | orchestrator | Tuesday 13 May 2025 19:58:54 +0000 (0:00:00.842) 0:02:06.405 *********** 2025-05-13 20:01:30.177278 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:01:30.177289 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:01:30.177300 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:01:30.177311 | orchestrator | 2025-05-13 20:01:30.177322 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-05-13 20:01:30.177333 | orchestrator | Tuesday 13 May 2025 19:58:55 +0000 (0:00:00.370) 0:02:06.775 *********** 2025-05-13 20:01:30.177344 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:01:30.177355 | orchestrator | 2025-05-13 20:01:30.177365 | 
orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-05-13 20:01:30.177376 | orchestrator | Tuesday 13 May 2025 19:58:56 +0000 (0:00:00.752) 0:02:07.528 *********** 2025-05-13 20:01:30.177387 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.177398 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.177408 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.177420 | orchestrator | 2025-05-13 20:01:30.177431 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-05-13 20:01:30.177441 | orchestrator | Tuesday 13 May 2025 19:58:56 +0000 (0:00:00.517) 0:02:08.046 *********** 2025-05-13 20:01:30.177452 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.177463 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.177474 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.177485 | orchestrator | 2025-05-13 20:01:30.177496 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-05-13 20:01:30.177506 | orchestrator | Tuesday 13 May 2025 19:58:57 +0000 (0:00:00.640) 0:02:08.686 *********** 2025-05-13 20:01:30.177517 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:01:30.177528 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:01:30.177539 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:01:30.177566 | orchestrator | 2025-05-13 20:01:30.177578 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-05-13 20:01:30.177589 | orchestrator | Tuesday 13 May 2025 19:58:57 +0000 (0:00:00.679) 0:02:09.366 *********** 2025-05-13 20:01:30.177600 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.177611 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.177622 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.177633 | orchestrator | 2025-05-13 20:01:30.177643 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-05-13 20:01:30.177654 | orchestrator | Tuesday 13 May 2025 19:59:00 +0000 (0:00:02.836) 0:02:12.203 *********** 2025-05-13 20:01:30.177665 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:01:30.177676 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:01:30.177687 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:01:30.177698 | orchestrator | 2025-05-13 20:01:30.177709 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-13 20:01:30.177719 | orchestrator | 2025-05-13 20:01:30.177730 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-13 20:01:30.177741 | orchestrator | Tuesday 13 May 2025 19:59:10 +0000 (0:00:09.816) 0:02:22.019 *********** 2025-05-13 20:01:30.177752 | orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.177763 | orchestrator | 2025-05-13 20:01:30.177781 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-13 20:01:30.177792 | orchestrator | Tuesday 13 May 2025 19:59:11 +0000 (0:00:00.788) 0:02:22.807 *********** 2025-05-13 20:01:30.177803 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.177814 | orchestrator | 2025-05-13 20:01:30.177825 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-13 20:01:30.177836 | orchestrator | Tuesday 13 May 2025 19:59:11 +0000 
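On the worker side, the agent play above renders a systemd unit and (re)starts it; the 9.8 s spent in "Manage k3s service" is the three agents joining the cluster. A sketch only, with the unit name and template file as assumptions:

    - name: Configure the k3s service
      ansible.builtin.template:
        src: k3s.service.j2              # hypothetical template
        dest: /etc/systemd/system/k3s.service
        owner: root
        group: root
        mode: "0644"

    - name: Manage k3s service
      ansible.builtin.systemd:
        name: k3s
        daemon_reload: true
        state: restarted
        enabled: true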
(0:00:00.444) 0:02:23.252 *********** 2025-05-13 20:01:30.177847 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-13 20:01:30.177858 | orchestrator | 2025-05-13 20:01:30.177875 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-13 20:01:30.177887 | orchestrator | Tuesday 13 May 2025 19:59:12 +0000 (0:00:00.951) 0:02:24.203 *********** 2025-05-13 20:01:30.177898 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.177909 | orchestrator | 2025-05-13 20:01:30.177920 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-13 20:01:30.177930 | orchestrator | Tuesday 13 May 2025 19:59:13 +0000 (0:00:00.801) 0:02:25.005 *********** 2025-05-13 20:01:30.177941 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.177952 | orchestrator | 2025-05-13 20:01:30.177968 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-13 20:01:30.177979 | orchestrator | Tuesday 13 May 2025 19:59:14 +0000 (0:00:00.575) 0:02:25.581 *********** 2025-05-13 20:01:30.177990 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-13 20:01:30.178001 | orchestrator | 2025-05-13 20:01:30.178012 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-13 20:01:30.178056 | orchestrator | Tuesday 13 May 2025 19:59:15 +0000 (0:00:01.663) 0:02:27.245 *********** 2025-05-13 20:01:30.178068 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-13 20:01:30.178079 | orchestrator | 2025-05-13 20:01:30.178090 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-13 20:01:30.178101 | orchestrator | Tuesday 13 May 2025 19:59:16 +0000 (0:00:00.829) 0:02:28.074 *********** 2025-05-13 20:01:30.178111 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.178122 | orchestrator | 2025-05-13 20:01:30.178133 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-13 20:01:30.178144 | orchestrator | Tuesday 13 May 2025 19:59:17 +0000 (0:00:00.391) 0:02:28.466 *********** 2025-05-13 20:01:30.178155 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.178166 | orchestrator | 2025-05-13 20:01:30.178177 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-05-13 20:01:30.178188 | orchestrator | 2025-05-13 20:01:30.178199 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-05-13 20:01:30.178210 | orchestrator | Tuesday 13 May 2025 19:59:17 +0000 (0:00:00.404) 0:02:28.870 *********** 2025-05-13 20:01:30.178222 | orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.178232 | orchestrator | 2025-05-13 20:01:30.178243 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-05-13 20:01:30.178254 | orchestrator | Tuesday 13 May 2025 19:59:17 +0000 (0:00:00.132) 0:02:29.003 *********** 2025-05-13 20:01:30.178265 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-05-13 20:01:30.178276 | orchestrator | 2025-05-13 20:01:30.178286 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-05-13 20:01:30.178297 | orchestrator | Tuesday 13 May 2025 19:59:18 +0000 (0:00:00.417) 0:02:29.420 *********** 2025-05-13 20:01:30.178308 
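The kubeconfig play pulls /etc/rancher/k3s/k3s.yaml from the first master, rewrites the server address to the kube VIP (https://192.168.16.8:6443, per the "Configure kubectl cluster" task above), and installs it for the operator user on the manager. A sketch with assumed variable and group names:

    - name: Get kubeconfig file
      ansible.builtin.slurp:
        src: /etc/rancher/k3s/k3s.yaml
      delegate_to: "{{ groups['k3s_masters'][0] }}"   # assumed group name
      register: kubeconfig_b64

    - name: Write kubeconfig file
      ansible.builtin.copy:
        content: "{{ kubeconfig_b64.content | b64decode }}"
        dest: "{{ operator_home }}/.kube/config"      # operator_home gathered earlier in the play
        mode: "0600"

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: "{{ operator_home }}/.kube/config"
        regexp: 'https://127\.0\.0\.1:6443'
        replace: "https://192.168.16.8:6443"          # the kube VIP seen above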
| orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.178319 | orchestrator | 2025-05-13 20:01:30.178330 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-05-13 20:01:30.178341 | orchestrator | Tuesday 13 May 2025 19:59:18 +0000 (0:00:00.758) 0:02:30.179 *********** 2025-05-13 20:01:30.178351 | orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.178362 | orchestrator | 2025-05-13 20:01:30.178373 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-05-13 20:01:30.178390 | orchestrator | Tuesday 13 May 2025 19:59:20 +0000 (0:00:01.646) 0:02:31.825 *********** 2025-05-13 20:01:30.178401 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.178412 | orchestrator | 2025-05-13 20:01:30.178423 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-05-13 20:01:30.178435 | orchestrator | Tuesday 13 May 2025 19:59:21 +0000 (0:00:00.770) 0:02:32.595 *********** 2025-05-13 20:01:30.178445 | orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.178456 | orchestrator | 2025-05-13 20:01:30.178467 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-05-13 20:01:30.178478 | orchestrator | Tuesday 13 May 2025 19:59:21 +0000 (0:00:00.429) 0:02:33.025 *********** 2025-05-13 20:01:30.178489 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.178500 | orchestrator | 2025-05-13 20:01:30.178511 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-05-13 20:01:30.178522 | orchestrator | Tuesday 13 May 2025 19:59:27 +0000 (0:00:06.095) 0:02:39.120 *********** 2025-05-13 20:01:30.178533 | orchestrator | changed: [testbed-manager] 2025-05-13 20:01:30.178543 | orchestrator | 2025-05-13 20:01:30.178609 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-05-13 20:01:30.178623 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:12.671) 0:02:51.791 *********** 2025-05-13 20:01:30.178635 | orchestrator | ok: [testbed-manager] 2025-05-13 20:01:30.178646 | orchestrator | 2025-05-13 20:01:30.178657 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-05-13 20:01:30.178669 | orchestrator | 2025-05-13 20:01:30.178680 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-05-13 20:01:30.178691 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:00.476) 0:02:52.268 *********** 2025-05-13 20:01:30.178703 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:01:30.178714 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:01:30.178725 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:01:30.178736 | orchestrator | 2025-05-13 20:01:30.178748 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-05-13 20:01:30.178759 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.482) 0:02:52.750 *********** 2025-05-13 20:01:30.178771 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:01:30.178782 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:01:30.178793 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:01:30.178805 | orchestrator | 2025-05-13 20:01:30.178817 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-05-13 20:01:30.178828 | orchestrator | Tuesday 
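The kubectl role above follows the standard Debian-family recipe: fetch the repository signing key, add the pkgs.k8s.io repository, install the package (the 12.7 s step). A sketch, with the release channel as an assumption:

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key   # assumed channel
        dest: /etc/apt/keyrings/kubernetes.asc
        mode: "0644"

    - name: Add repository Debian
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes.asc] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /"
        state: present

    - name: Install required packages
      ansible.builtin.apt:
        name: kubectl
        state: present
        update_cache: true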
13 May 2025 19:59:41 +0000 (0:00:00.405) 0:02:53.156 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.580) 0:02:53.737 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Tuesday 13 May 2025 19:59:43 +0000 (0:00:00.894) 0:02:54.631 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Tuesday 13 May 2025 19:59:44 +0000 (0:00:00.925) 0:02:55.557 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Tuesday 13 May 2025 19:59:44 +0000 (0:00:00.588) 0:02:56.146 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Tuesday 13 May 2025 19:59:45 +0000 (0:00:00.958) 0:02:57.105 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Tuesday 13 May 2025 19:59:45 +0000 (0:00:00.139) 0:02:57.244 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Tuesday 13 May 2025 19:59:45 +0000 (0:00:00.161) 0:02:57.406 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Tuesday 13 May 2025 19:59:46 +0000 (0:00:00.210) 0:02:57.616 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Tuesday 13 May 2025 19:59:46 +0000 (0:00:00.183) 0:02:57.799 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Tuesday 13 May 2025 19:59:50 +0000 (0:00:04.607) 0:03:02.407 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Tuesday 13 May 2025 20:01:01 +0000 (0:01:10.642) 0:04:13.049 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Tuesday 13 May 2025 20:01:02 +0000 (0:00:01.180) 0:04:14.230 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Tuesday 13 May 2025 20:01:04 +0000 (0:00:01.794) 0:04:16.025 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Tuesday 13 May 2025 20:01:05 +0000 (0:00:01.208) 0:04:17.234 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Tuesday 13 May 2025 20:01:05 +0000 (0:00:00.167) 0:04:17.401 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
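Editor's note: the 70-second "Wait for Cilium resources" step above retries until the Cilium workloads finish rolling out. A minimal sketch of such a wait as an Ansible task, using the resource names from the log (the retry and delay values are assumptions, not the role's actual settings):

- name: Wait for Cilium resources
  ansible.builtin.command: >-
    kubectl rollout status --namespace kube-system --timeout=30s {{ item }}
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  register: result
  until: result.rc == 0   # retry while the rollout is not yet complete
  retries: 30
  delay: 5
  changed_when: false
  delegate_to: localhost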
TASK [k3s_server_post : Deploy metallb pool] ***********************************
Tuesday 13 May 2025 20:01:08 +0000 (0:00:02.031) 0:04:19.433 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Tuesday 13 May 2025 20:01:08 +0000 (0:00:00.230) 0:04:19.663 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Tuesday 13 May 2025 20:01:09 +0000 (0:00:00.828) 0:04:20.492 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Tuesday 13 May 2025 20:01:09 +0000 (0:00:00.121) 0:04:20.613 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Tuesday 13 May 2025 20:01:09 +0000 (0:00:00.326) 0:04:20.940 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Tuesday 13 May 2025 20:01:15 +0000 (0:00:05.743) 0:04:26.683 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Tuesday 13 May 2025 20:01:15 +0000 (0:00:00.548) 0:04:27.231 ***********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Tuesday 13 May 2025 20:01:27 +0000 (0:00:11.694) 0:04:38.926 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
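Editor's note: the labels above pin OSISM's control, network, and Rook storage planes to specific nodes. A minimal sketch of how such labels can be applied per node (label values taken from the log; the kubectl invocation is an assumption, not necessarily what the play uses):

- name: Manage labels
  ansible.builtin.command: >-
    kubectl label node {{ inventory_hostname }} {{ item }} --overwrite
  loop:
    - node-role.osism.tech/control-plane=true
    - openstack-control-plane=enabled
    - node-role.osism.tech/network-plane=true
  changed_when: false   # --overwrite gives no reliable change signal
  delegate_to: localhost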
TASK [Manage taints] ***********************************************************
Tuesday 13 May 2025 20:01:27 +0000 (0:00:00.443) 0:04:39.370 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 13 May 2025 20:01:28 +0000 (0:00:00.509) 0:04:39.879 ***********
===============================================================================
k3s_server_post : Wait for Cilium resources ---------------------------- 70.64s
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.06s
k3s_server : Enable and check K3s service ------------------------------ 14.60s
kubectl : Install required packages ------------------------------------ 12.67s
Manage labels ---------------------------------------------------------- 11.69s
k3s_agent : Manage k3s service ------------------------------------------ 9.82s
kubectl : Add repository Debian ----------------------------------------- 6.10s
k9s : Install k9s packages ---------------------------------------------- 5.74s
k3s_download : Download k3s binary x64 ---------------------------------- 5.68s
k3s_server_post : Install Cilium ---------------------------------------- 4.61s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.34s
k3s_agent : Configure the k3s service ----------------------------------- 2.84s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.65s
k3s_server : Init cluster inside the transient k3s-init service --------- 2.28s
k3s_server_post : Test for BGP config resources ------------------------- 2.03s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.90s
k3s_server_post : Copy BGP manifests to first master -------------------- 1.79s
k3s_server : Copy vip manifest to first master -------------------------- 1.79s
k3s_server : Copy K3s service file -------------------------------------- 1.73s
Make kubeconfig available for use inside the manager service ------------ 1.66s
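Editor's note: after the Cilium wait, the slowest step in the recap is the 56-second node-join verification. A plausible sketch of such a check, polling until every inventory host has registered with the API server (the group name and retry budget are assumptions; the role's actual task may differ):

- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command: kubectl get nodes --no-headers --output name
  register: joined
  until: (joined.stdout_lines | length) == (groups['k3s_cluster'] | length)
  retries: 20
  delay: 10
  changed_when: false
  delegate_to: localhost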
2025-05-13 20:01:30 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:01:30 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED
2025-05-13 20:01:30 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:01:33 | INFO  | Task fc91bea1-cafd-40eb-9eab-5d4426f51398 is in state STARTED
2025-05-13 20:01:33 | INFO  | Task e8650d26-c9d8-4d11-8b89-e83832ef89aa is in state STARTED
2025-05-13 20:01:33 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:01:33 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:01:33 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:01:33 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED
2025-05-13 20:01:33 | INFO  | Wait 1 second(s) until the next check
[identical STARTED polls for all six tasks repeated every 3 seconds until 20:01:45]
2025-05-13 20:01:45 | INFO  | Task fc91bea1-cafd-40eb-9eab-5d4426f51398 is in state SUCCESS
2025-05-13 20:01:45 | INFO  | Task e8650d26-c9d8-4d11-8b89-e83832ef89aa is in state SUCCESS
2025-05-13 20:01:45 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:01:45 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:01:45 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:01:45 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state STARTED
2025-05-13 20:01:45 | INFO  | Wait 1 second(s) until the next check
[identical STARTED polls for the four remaining tasks repeated every 3 seconds until 20:02:22]
2025-05-13 20:02:22 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:22 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:22 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:22 | INFO  | Task 00a272a3-06e8-4f7f-b8ab-c224bf87fa77 is in state SUCCESS
PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Tuesday 13 May 2025 20:01:33 +0000 (0:00:00.147) 0:00:00.147 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Tuesday 13 May 2025 20:01:33 +0000 (0:00:00.626) 0:00:00.773 ***********
ok: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Tuesday 13 May 2025 20:01:34 +0000 (0:00:00.665) 0:00:01.439 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Tuesday 13 May 2025 20:01:35 +0000 (0:00:00.716) 0:00:02.156 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Tuesday 13 May 2025 20:01:36 +0000 (0:00:01.199) 0:00:03.355 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Tuesday 13 May 2025 20:01:37 +0000 (0:00:01.148) 0:00:04.504 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Tuesday 13 May 2025 20:01:39 +0000 (0:00:02.584) 0:00:07.088 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Tuesday 13 May 2025 20:01:41 +0000 (0:00:01.193) 0:00:08.282 ***********
ok: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Tuesday 13 May 2025 20:01:41 +0000 (0:00:00.532) 0:00:08.814 ***********
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 13 May 2025 20:01:42 +0000 (0:00:00.359) 0:00:09.174 ***********
===============================================================================
Make kubeconfig available for use inside the manager service ------------ 2.58s
Write kubeconfig file --------------------------------------------------- 1.20s
Change server address in the kubeconfig inside the manager service ------ 1.19s
Change server address in the kubeconfig --------------------------------- 1.15s
Get kubeconfig file ----------------------------------------------------- 0.72s
Create .kube directory -------------------------------------------------- 0.67s
Get home directory of operator user ------------------------------------- 0.63s
Set KUBECONFIG environment variable ------------------------------------- 0.53s
Enable kubectl command line completion ---------------------------------- 0.36s
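Editor's note: the play above fetches the kubeconfig from the first control plane node and rewrites its server address so it works off-node. A minimal sketch of that flow, assuming the default k3s kubeconfig path /etc/rancher/k3s/k3s.yaml and using the node address 192.168.16.10 seen in the log:

- name: Get kubeconfig file
  ansible.builtin.slurp:
    src: /etc/rancher/k3s/k3s.yaml    # assumed default k3s location
  delegate_to: testbed-node-0
  register: kubeconfig

- name: Write kubeconfig file
  ansible.builtin.copy:
    content: "{{ kubeconfig.content | b64decode }}"
    dest: "{{ ansible_env.HOME }}/.kube/config"
    mode: "0600"

- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ ansible_env.HOME }}/.kube/config"
    regexp: 'https://127\.0\.0\.1:6443'   # k3s writes a loopback address
    replace: "https://192.168.16.10:6443"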
PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Tuesday 13 May 2025 20:01:35 +0000 (0:00:01.870) 0:00:01.870 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Tuesday 13 May 2025 20:01:37 +0000 (0:00:02.975) 0:00:04.846 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Tuesday 13 May 2025 20:01:41 +0000 (0:00:03.021) 0:00:07.867 ***********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Tuesday 13 May 2025 20:01:43 +0000 (0:00:02.213) 0:00:10.080 ***********
===============================================================================
Write kubeconfig file --------------------------------------------------- 3.02s
Get kubeconfig file ----------------------------------------------------- 2.98s
Change server address in the kubeconfig file ---------------------------- 2.21s

PLAY [Set kolla_action_rabbitmq] ***********************************************

TASK [Inform the user about the following task] ********************************
Tuesday 13 May 2025 20:00:03 +0000 (0:00:00.223) 0:00:00.223 ***********
ok: [localhost] => {
    "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
}

TASK [Check RabbitMQ service] **************************************************
Tuesday 13 May 2025 20:00:03 +0000 (0:00:00.076) 0:00:00.299 ***********
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
...ignoring

TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
Tuesday 13 May 2025 20:00:07 +0000 (0:00:04.524) 0:00:04.823 ***********
skipping: [localhost]

TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
Tuesday 13 May 2025 20:00:07 +0000 (0:00:00.049) 0:00:04.873 ***********
ok: [localhost]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Tuesday 13 May 2025 20:00:07 +0000 (0:00:00.127) 0:00:05.000 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Tuesday 13 May 2025 20:00:08 +0000 (0:00:00.265) 0:00:05.266 ***********
ok: [testbed-node-1] => (item=enable_rabbitmq_True)
ok: [testbed-node-0] => (item=enable_rabbitmq_True)
ok: [testbed-node-2] => (item=enable_rabbitmq_True)
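Editor's note: the expected "Check RabbitMQ service" failure above is the standard timeout message of ansible.builtin.wait_for with search_regex; on a fresh deployment nothing answers on the management port yet, which is why the result is ignored. A sketch of such a probe, with the address and port taken from the log (the timeout value is an assumption):

- name: Check RabbitMQ service
  ansible.builtin.wait_for:
    host: 192.168.16.9
    port: 15672
    search_regex: "RabbitMQ Management"
    timeout: 3
  register: rabbitmq_check
  ignore_errors: true   # failing here just means RabbitMQ is not deployed yet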
PLAY [Apply role rabbitmq] *****************************************************

TASK [rabbitmq : include_tasks] ************************************************
Tuesday 13 May 2025 20:00:09 +0000 (0:00:01.035) 0:00:06.301 ***********
included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Tuesday 13 May 2025 20:00:09 +0000 (0:00:00.557) 0:00:06.859 ***********
ok: [testbed-node-0]

TASK [rabbitmq : Get current RabbitMQ version] *********************************
Tuesday 13 May 2025 20:00:10 +0000 (0:00:00.991) 0:00:07.850 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : Get new RabbitMQ version] *************************************
Tuesday 13 May 2025 20:00:11 +0000 (0:00:00.404) 0:00:08.255 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
Tuesday 13 May 2025 20:00:11 +0000 (0:00:00.354) 0:00:08.609 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
Tuesday 13 May 2025 20:00:11 +0000 (0:00:00.353) 0:00:08.963 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : include_tasks] ************************************************
Tuesday 13 May 2025 20:00:12 +0000 (0:00:00.583) 0:00:09.546 ***********
included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Tuesday 13 May 2025 20:00:13 +0000 (0:00:00.585) 0:00:10.132 ***********
ok: [testbed-node-0]

TASK [rabbitmq : List RabbitMQ policies] ***************************************
Tuesday 13 May 2025 20:00:13 +0000 (0:00:00.856) 0:00:10.988 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
Tuesday 13 May 2025 20:00:14 +0000 (0:00:00.413) 0:00:11.402 ***********
skipping: [testbed-node-0]

TASK [rabbitmq : Ensuring config directories exist] ****************************
Tuesday 13 May 2025 20:00:14 +0000 (0:00:00.347) 0:00:11.749 ***********
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})

TASK [rabbitmq : Copying over config.json files for services] ******************
Tuesday 13 May 2025 20:00:16 +0000 (0:00:01.803) 0:00:13.552 ***********
changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...})
changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})

TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
Tuesday 13 May 2025 20:00:19 +0000 (0:00:03.078) 0:00:16.631 ***********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)

TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
Tuesday 13 May 2025 20:00:22 +0000 (0:00:02.456) 0:00:19.088 ***********
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)

TASK [rabbitmq : Copying over erl_inetrc] **************************************
Tuesday 13 May 2025 20:00:24 +0000 (0:00:02.386) 0:00:21.474 ***********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)

TASK [rabbitmq : Copying over advanced.config] *********************************
Tuesday 13 May 2025 20:00:25 +0000 (0:00:01.320) 0:00:22.794 ***********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)

TASK [rabbitmq : Copying over definitions.json] ********************************
Tuesday 13 May 2025 20:00:27 +0000 (0:00:02.047) 0:00:24.841 ***********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)

TASK [rabbitmq : Copying over enabled_plugins] *********************************
Tuesday 13 May 2025 20:00:29 +0000 (0:00:01.740) 0:00:26.582 ***********
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
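Editor's note: each file rendered above lands in /etc/kolla/rabbitmq/ on the host and is mounted into the container at /var/lib/kolla/config_files/ (see the volumes list in the service definition). With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, the container's start script copies files into place as directed by config.json. A minimal sketch of such a config.json (the command and file list are illustrative assumptions, not the rendered testbed content):

{
  "command": "/usr/sbin/rabbitmq-server",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/rabbitmq.conf",
      "dest": "/etc/rabbitmq/rabbitmq.conf",
      "owner": "rabbitmq",
      "perm": "0600"
    },
    {
      "source": "/var/lib/kolla/config_files/enabled_plugins",
      "dest": "/etc/rabbitmq/enabled_plugins",
      "owner": "rabbitmq",
      "perm": "0600"
    }
  ]
}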
20:00:27 +0000 (0:00:02.047) 0:00:24.841 *********** 2025-05-13 20:02:22.178139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-13 20:02:22.178143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-13 20:02:22.178147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-13 20:02:22.178151 | orchestrator | 2025-05-13 20:02:22.178154 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-13 20:02:22.178158 | orchestrator | Tuesday 13 May 2025 20:00:29 +0000 (0:00:01.740) 0:00:26.582 *********** 2025-05-13 20:02:22.178162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 20:02:22.178166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 20:02:22.178170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-13 20:02:22.178174 | orchestrator | 2025-05-13 20:02:22.178177 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-13 20:02:22.178181 | orchestrator | Tuesday 13 May 2025 20:00:31 +0000 (0:00:01.594) 0:00:28.176 *********** 2025-05-13 20:02:22.178185 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:02:22.178189 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:02:22.178192 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:02:22.178196 | orchestrator | 2025-05-13 20:02:22.178200 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-13 20:02:22.178204 | orchestrator | Tuesday 13 May 2025 20:00:31 +0000 (0:00:00.435) 0:00:28.612 *********** 2025-05-13 20:02:22.178211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:02:22.178216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:02:22.178229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:02:22.178233 | orchestrator | 2025-05-13 20:02:22.178237 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-13 20:02:22.178240 | orchestrator | Tuesday 13 May 2025 20:00:33 +0000 (0:00:01.589) 0:00:30.202 *********** 2025-05-13 20:02:22.178244 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:02:22.178248 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:02:22.178252 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:02:22.178256 | orchestrator | 2025-05-13 20:02:22.178259 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-13 20:02:22.178263 | orchestrator | Tuesday 13 May 2025 20:00:34 +0000 (0:00:00.845) 0:00:31.048 *********** 2025-05-13 20:02:22.178267 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:02:22.178271 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:02:22.178274 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:02:22.178278 | orchestrator | 2025-05-13 20:02:22.178282 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-13 20:02:22.178285 | orchestrator | Tuesday 13 May 2025 20:00:41 +0000 (0:00:07.548) 0:00:38.596 *********** 2025-05-13 20:02:22.178289 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:02:22.178293 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:02:22.178297 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:02:22.178300 | orchestrator | 2025-05-13 20:02:22.178304 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 20:02:22.178308 | orchestrator | 2025-05-13 20:02:22.178312 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 20:02:22.178318 | orchestrator | Tuesday 13 May 2025 20:00:42 +0000 
(0:00:00.466) 0:00:39.063 *********** 2025-05-13 20:02:22.178322 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:02:22.178326 | orchestrator | 2025-05-13 20:02:22.178330 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 20:02:22.178333 | orchestrator | Tuesday 13 May 2025 20:00:42 +0000 (0:00:00.563) 0:00:39.627 *********** 2025-05-13 20:02:22.178337 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:02:22.178341 | orchestrator | 2025-05-13 20:02:22.178344 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-13 20:02:22.178348 | orchestrator | Tuesday 13 May 2025 20:00:42 +0000 (0:00:00.315) 0:00:39.943 *********** 2025-05-13 20:02:22.178352 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:02:22.178355 | orchestrator | 2025-05-13 20:02:22.178359 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-13 20:02:22.178363 | orchestrator | Tuesday 13 May 2025 20:00:45 +0000 (0:00:02.519) 0:00:42.462 *********** 2025-05-13 20:02:22.178367 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:02:22.178370 | orchestrator | 2025-05-13 20:02:22.178374 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 20:02:22.178378 | orchestrator | 2025-05-13 20:02:22.178382 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 20:02:22.178389 | orchestrator | Tuesday 13 May 2025 20:01:40 +0000 (0:00:54.740) 0:01:37.202 *********** 2025-05-13 20:02:22.178393 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:02:22.178396 | orchestrator | 2025-05-13 20:02:22.178400 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 20:02:22.178404 | orchestrator | Tuesday 13 May 2025 20:01:41 +0000 (0:00:01.030) 0:01:38.233 *********** 2025-05-13 20:02:22.178408 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:02:22.178411 | orchestrator | 2025-05-13 20:02:22.178415 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-13 20:02:22.178419 | orchestrator | Tuesday 13 May 2025 20:01:42 +0000 (0:00:01.079) 0:01:39.312 *********** 2025-05-13 20:02:22.178423 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:02:22.178426 | orchestrator | 2025-05-13 20:02:22.178430 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-13 20:02:22.178434 | orchestrator | Tuesday 13 May 2025 20:01:49 +0000 (0:00:06.984) 0:01:46.297 *********** 2025-05-13 20:02:22.178438 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:02:22.178441 | orchestrator | 2025-05-13 20:02:22.178445 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-13 20:02:22.178449 | orchestrator | 2025-05-13 20:02:22.178453 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-13 20:02:22.178456 | orchestrator | Tuesday 13 May 2025 20:01:58 +0000 (0:00:08.739) 0:01:55.037 *********** 2025-05-13 20:02:22.178460 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:02:22.178464 | orchestrator | 2025-05-13 20:02:22.178468 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-13 20:02:22.178472 | orchestrator | Tuesday 13 May 2025 20:01:58 +0000 (0:00:00.670) 0:01:55.708 
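The three 'Restart rabbitmq services' plays above and below run one node at a time (testbed-node-0, then -1, then -2) so that the RabbitMQ cluster never loses quorum; 'Put RabbitMQ node into maintenance mode' is skipped because this is an initial deployment rather than an upgrade. Between restarts, a node's state can be checked with stock rabbitmqctl commands; a hedged sketch (the exact checks kolla runs are not shown in this log):

  docker exec rabbitmq rabbitmqctl await_startup   # block until the local node has booted
  docker exec rabbitmq rabbitmqctl cluster_status  # confirm all three nodes are running members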
***********
2025-05-13 20:02:22.178475 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:02:22.178479 | orchestrator |
2025-05-13 20:02:22.178483 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-13 20:02:22.178486 | orchestrator | Tuesday 13 May 2025 20:01:58 +0000 (0:00:00.268) 0:01:55.977 ***********
2025-05-13 20:02:22.178490 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:02:22.178494 | orchestrator |
2025-05-13 20:02:22.178498 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-13 20:02:22.178505 | orchestrator | Tuesday 13 May 2025 20:02:00 +0000 (0:00:01.708) 0:01:57.685 ***********
2025-05-13 20:02:22.178509 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:02:22.178512 | orchestrator |
2025-05-13 20:02:22.178516 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-13 20:02:22.178520 | orchestrator |
2025-05-13 20:02:22.178523 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-13 20:02:22.178527 | orchestrator | Tuesday 13 May 2025 20:02:16 +0000 (0:00:15.619) 0:02:13.305 ***********
2025-05-13 20:02:22.178531 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:02:22.178535 | orchestrator |
2025-05-13 20:02:22.178538 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-13 20:02:22.178542 | orchestrator | Tuesday 13 May 2025 20:02:17 +0000 (0:00:00.742) 0:02:14.047 ***********
2025-05-13 20:02:22.178546 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-13 20:02:22.178550 | orchestrator | enable_outward_rabbitmq_True
2025-05-13 20:02:22.178553 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-13 20:02:22.178557 | orchestrator | outward_rabbitmq_restart
2025-05-13 20:02:22.178561 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:02:22.178565 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:02:22.178568 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:02:22.178572 | orchestrator |
2025-05-13 20:02:22.178576 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-13 20:02:22.178580 | orchestrator | skipping: no hosts matched
2025-05-13 20:02:22.178626 | orchestrator |
2025-05-13 20:02:22.178634 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-13 20:02:22.178638 | orchestrator | skipping: no hosts matched
2025-05-13 20:02:22.178641 | orchestrator |
2025-05-13 20:02:22.178645 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-13 20:02:22.178649 | orchestrator | skipping: no hosts matched
2025-05-13 20:02:22.178653 | orchestrator |
2025-05-13 20:02:22.178656 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:02:22.178661 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-13 20:02:22.178666 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 20:02:22.178672 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:02:22.178676 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:02:22.178680 | orchestrator |
2025-05-13 20:02:22.178684 | orchestrator |
2025-05-13 20:02:22.178687 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:02:22.178691 | orchestrator | Tuesday 13 May 2025 20:02:19 +0000 (0:00:02.502) 0:02:16.549 ***********
2025-05-13 20:02:22.178695 | orchestrator | ===============================================================================
2025-05-13 20:02:22.178699 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.10s
2025-05-13 20:02:22.178703 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.21s
2025-05-13 20:02:22.178706 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.55s
2025-05-13 20:02:22.178710 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.52s
2025-05-13 20:02:22.178714 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.08s
2025-05-13 20:02:22.178718 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.50s
2025-05-13 20:02:22.178721 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.46s
2025-05-13 20:02:22.178725 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.39s
2025-05-13 20:02:22.178729 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.27s
2025-05-13 20:02:22.178733 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.05s
2025-05-13 20:02:22.178736 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.80s
2025-05-13 20:02:22.178740 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.74s
2025-05-13 20:02:22.178744 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.66s
2025-05-13 20:02:22.178748 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.59s
2025-05-13 20:02:22.178751 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.59s
2025-05-13 20:02:22.178755 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.32s
2025-05-13 20:02:22.178759 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s
2025-05-13 20:02:22.178762 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s
2025-05-13 20:02:22.178766 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s
2025-05-13 20:02:22.178770 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s
2025-05-13 20:02:22.178774 | orchestrator | 2025-05-13 20:02:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:25.221950 | orchestrator | 2025-05-13 20:02:25 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:25.224655 | orchestrator | 2025-05-13 20:02:25 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:25.225491 | orchestrator | 2025-05-13 20:02:25 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:25.225639 | orchestrator | 2025-05-13 20:02:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:28.263466 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:28.263777 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:28.267684 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:28.267714 | orchestrator | 2025-05-13 20:02:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:31.318894 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:31.319118 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:31.320214 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:31.320236 | orchestrator | 2025-05-13 20:02:31 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:34.365907 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:34.367338 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:34.368712 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:34.371092 | orchestrator | 2025-05-13 20:02:34 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:37.413080 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:37.415684 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:37.417325 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:37.417365 | orchestrator | 2025-05-13 20:02:37 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:40.479991 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:40.481769 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:40.483449 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:40.483821 | orchestrator | 2025-05-13 20:02:40 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:43.540468 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:43.542174 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:43.543908 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:02:43.543949 | orchestrator | 2025-05-13 20:02:43 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:02:46.601204 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED
2025-05-13 20:02:46.601537 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:02:46.602658 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task
orchestrator | 2025-05-13 20:02:25 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:28.263466 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:28.263777 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:28.267684 | orchestrator | 2025-05-13 20:02:28 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:28.267714 | orchestrator | 2025-05-13 20:02:28 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:31.318894 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:31.319118 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:31.320214 | orchestrator | 2025-05-13 20:02:31 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:31.320236 | orchestrator | 2025-05-13 20:02:31 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:34.365907 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:34.367338 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:34.368712 | orchestrator | 2025-05-13 20:02:34 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:34.371092 | orchestrator | 2025-05-13 20:02:34 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:37.413080 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:37.415684 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:37.417325 | orchestrator | 2025-05-13 20:02:37 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:37.417365 | orchestrator | 2025-05-13 20:02:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:40.479991 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:40.481769 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:40.483449 | orchestrator | 2025-05-13 20:02:40 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:40.483821 | orchestrator | 2025-05-13 20:02:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:43.540468 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:43.542174 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:43.543908 | orchestrator | 2025-05-13 20:02:43 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:43.543949 | orchestrator | 2025-05-13 20:02:43 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:46.601204 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:46.601537 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:46.602658 | orchestrator | 2025-05-13 20:02:46 | INFO  | Task 
2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:46.602700 | orchestrator | 2025-05-13 20:02:46 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:49.653926 | orchestrator | 2025-05-13 20:02:49 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:49.661135 | orchestrator | 2025-05-13 20:02:49 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:49.662372 | orchestrator | 2025-05-13 20:02:49 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:49.662479 | orchestrator | 2025-05-13 20:02:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:52.718423 | orchestrator | 2025-05-13 20:02:52 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:52.720000 | orchestrator | 2025-05-13 20:02:52 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:52.722005 | orchestrator | 2025-05-13 20:02:52 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:52.722374 | orchestrator | 2025-05-13 20:02:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:55.775598 | orchestrator | 2025-05-13 20:02:55 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:55.776041 | orchestrator | 2025-05-13 20:02:55 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:55.777019 | orchestrator | 2025-05-13 20:02:55 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:55.777885 | orchestrator | 2025-05-13 20:02:55 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:02:58.827191 | orchestrator | 2025-05-13 20:02:58 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:02:58.827777 | orchestrator | 2025-05-13 20:02:58 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:02:58.828601 | orchestrator | 2025-05-13 20:02:58 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:02:58.828884 | orchestrator | 2025-05-13 20:02:58 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:01.880180 | orchestrator | 2025-05-13 20:03:01 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:01.883076 | orchestrator | 2025-05-13 20:03:01 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:01.885527 | orchestrator | 2025-05-13 20:03:01 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:01.885722 | orchestrator | 2025-05-13 20:03:01 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:04.941825 | orchestrator | 2025-05-13 20:03:04 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:04.941950 | orchestrator | 2025-05-13 20:03:04 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:04.942200 | orchestrator | 2025-05-13 20:03:04 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:04.942226 | orchestrator | 2025-05-13 20:03:04 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:07.992907 | orchestrator | 2025-05-13 20:03:07 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:07.994363 | orchestrator | 2025-05-13 20:03:07 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state 
STARTED 2025-05-13 20:03:07.997204 | orchestrator | 2025-05-13 20:03:07 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:07.997326 | orchestrator | 2025-05-13 20:03:07 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:11.053841 | orchestrator | 2025-05-13 20:03:11 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:11.056306 | orchestrator | 2025-05-13 20:03:11 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:11.056344 | orchestrator | 2025-05-13 20:03:11 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:11.056356 | orchestrator | 2025-05-13 20:03:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:14.110196 | orchestrator | 2025-05-13 20:03:14 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:14.112806 | orchestrator | 2025-05-13 20:03:14 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:14.115017 | orchestrator | 2025-05-13 20:03:14 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:14.115053 | orchestrator | 2025-05-13 20:03:14 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:17.158396 | orchestrator | 2025-05-13 20:03:17 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:17.159400 | orchestrator | 2025-05-13 20:03:17 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:17.161150 | orchestrator | 2025-05-13 20:03:17 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:17.161681 | orchestrator | 2025-05-13 20:03:17 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:20.209268 | orchestrator | 2025-05-13 20:03:20 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:20.210755 | orchestrator | 2025-05-13 20:03:20 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:20.213241 | orchestrator | 2025-05-13 20:03:20 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:20.213770 | orchestrator | 2025-05-13 20:03:20 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:23.254365 | orchestrator | 2025-05-13 20:03:23 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:23.256098 | orchestrator | 2025-05-13 20:03:23 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:23.256132 | orchestrator | 2025-05-13 20:03:23 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:23.256145 | orchestrator | 2025-05-13 20:03:23 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:26.300090 | orchestrator | 2025-05-13 20:03:26 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:26.300274 | orchestrator | 2025-05-13 20:03:26 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:26.301582 | orchestrator | 2025-05-13 20:03:26 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:26.301752 | orchestrator | 2025-05-13 20:03:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:29.345947 | orchestrator | 2025-05-13 20:03:29 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state STARTED 2025-05-13 20:03:29.346549 | orchestrator 
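The interleaved INFO lines come from the OSISM manager polling the three kolla-ansible runs it launched as background tasks (one task ID per run); it simply re-queries each task every second until it leaves the STARTED state. A minimal sketch of that pattern, with a hypothetical task_state helper standing in for the real client, which is not shown in this log:

  # hypothetical helper: task_state <uuid> prints STARTED/SUCCESS/FAILURE
  while [ "$(task_state a6244877-780c-4e83-80c0-90112de9f198)" = "STARTED" ]; do
      sleep 1   # "Wait 1 second(s) until the next check"
  done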
| 2025-05-13 20:03:29 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:03:29.347735 | orchestrator | 2025-05-13 20:03:29 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED 2025-05-13 20:03:29.349101 | orchestrator | 2025-05-13 20:03:29 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:03:32.407897 | orchestrator | 2025-05-13 20:03:32 | INFO  | Task a6244877-780c-4e83-80c0-90112de9f198 is in state SUCCESS 2025-05-13 20:03:32.410687 | orchestrator | 2025-05-13 20:03:32.410881 | orchestrator | 2025-05-13 20:03:32.410897 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:03:32.410909 | orchestrator | 2025-05-13 20:03:32.410920 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:03:32.410931 | orchestrator | Tuesday 13 May 2025 20:00:55 +0000 (0:00:00.205) 0:00:00.206 *********** 2025-05-13 20:03:32.410959 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:03:32.410971 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:03:32.410982 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:03:32.410993 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.411005 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.411020 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.411036 | orchestrator | 2025-05-13 20:03:32.411048 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:03:32.411060 | orchestrator | Tuesday 13 May 2025 20:00:56 +0000 (0:00:00.947) 0:00:01.153 *********** 2025-05-13 20:03:32.411071 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-13 20:03:32.411083 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-13 20:03:32.411094 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-13 20:03:32.411105 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-13 20:03:32.411116 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-13 20:03:32.411127 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-13 20:03:32.411138 | orchestrator | 2025-05-13 20:03:32.411161 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-13 20:03:32.411173 | orchestrator | 2025-05-13 20:03:32.411187 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-13 20:03:32.411204 | orchestrator | Tuesday 13 May 2025 20:00:57 +0000 (0:00:00.961) 0:00:02.115 *********** 2025-05-13 20:03:32.411217 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:03:32.411229 | orchestrator | 2025-05-13 20:03:32.411239 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-13 20:03:32.411250 | orchestrator | Tuesday 13 May 2025 20:00:59 +0000 (0:00:01.162) 0:00:03.277 *********** 2025-05-13 20:03:32.411264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411367 | orchestrator | 2025-05-13 20:03:32.411391 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-13 20:03:32.411403 | orchestrator | Tuesday 13 May 2025 20:01:00 +0000 (0:00:01.010) 0:00:04.288 *********** 2025-05-13 20:03:32.411415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-13 20:03:32.411437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411490 | orchestrator | 2025-05-13 20:03:32.411501 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-13 20:03:32.411511 | orchestrator | Tuesday 13 May 2025 20:01:01 +0000 (0:00:01.863) 0:00:06.152 *********** 2025-05-13 20:03:32.411523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411686 | orchestrator 
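The systemd override tasks reflect that kolla-ansible manages each container through a systemd unit, with per-container drop-in overrides; the matching 'Reload systemd config' handler appears further down. A sketch of how to inspect the result on a node, assuming kolla's usual kolla-<container>-container.service unit naming (the unit name here is an assumption, not shown in the log):

  systemctl cat kolla-ovn_controller-container.service   # unit file plus any drop-in overrides
  systemctl daemon-reload                                # what the 'Reload systemd config' handler triggers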
| changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411731 | orchestrator | 2025-05-13 20:03:32.411741 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-13 20:03:32.411752 | orchestrator | Tuesday 13 May 2025 20:01:03 +0000 (0:00:02.053) 0:00:08.205 *********** 2025-05-13 20:03:32.411764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411817 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411878 | orchestrator | 2025-05-13 20:03:32.411895 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-13 20:03:32.411918 | orchestrator | Tuesday 13 May 2025 20:01:06 +0000 (0:00:02.635) 0:00:10.840 *********** 2025-05-13 20:03:32.411930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.411994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.412005 | orchestrator | 2025-05-13 20:03:32.412016 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-13 20:03:32.412026 | orchestrator | Tuesday 13 May 2025 20:01:08 +0000 (0:00:01.624) 0:00:12.465 *********** 2025-05-13 20:03:32.412038 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:03:32.412049 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:03:32.412060 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:03:32.412071 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.412082 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.412092 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.412103 | orchestrator | 2025-05-13 20:03:32.412114 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-13 20:03:32.412125 | orchestrator | Tuesday 13 May 2025 20:01:10 +0000 (0:00:02.357) 0:00:14.822 *********** 2025-05-13 20:03:32.412136 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-13 20:03:32.412147 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-13 20:03:32.412158 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-13 20:03:32.412169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-13 20:03:32.412180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-13 20:03:32.412191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-13 20:03:32.412202 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412212 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412246 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-13 20:03:32.412278 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412290 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412347 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-13 20:03:32.412365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412384 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-13 20:03:32.412454 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412486 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-13 20:03:32.412519 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412530 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412552 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412574 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-13 20:03:32.412584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-13 20:03:32.412595 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-13 20:03:32.412635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-13 20:03:32.412647 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-13 20:03:32.412658 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'present'}) 2025-05-13 20:03:32.412668 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-13 20:03:32.412680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-13 20:03:32.412691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-13 20:03:32.412715 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-13 20:03:32.412734 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-13 20:03:32.412745 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-13 20:03:32.412756 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-13 20:03:32.412767 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-13 20:03:32.412778 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-13 20:03:32.412789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-13 20:03:32.412799 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-13 20:03:32.412810 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-13 20:03:32.412821 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-13 20:03:32.412832 | orchestrator | 2025-05-13 20:03:32.412843 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.412854 | orchestrator | Tuesday 13 May 2025 20:01:28 +0000 (0:00:18.077) 0:00:32.899 *********** 2025-05-13 20:03:32.412865 | orchestrator | 2025-05-13 20:03:32.412876 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.412886 | orchestrator | Tuesday 13 May 2025 20:01:28 +0000 (0:00:00.067) 0:00:32.967 *********** 2025-05-13 20:03:32.412897 | orchestrator | 2025-05-13 20:03:32.412907 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.412918 | orchestrator | Tuesday 13 May 2025 20:01:28 +0000 (0:00:00.144) 0:00:33.111 *********** 2025-05-13 20:03:32.412928 | orchestrator | 2025-05-13 20:03:32.412938 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.412949 | orchestrator | Tuesday 13 May 2025 20:01:29 +0000 (0:00:00.180) 0:00:33.292 *********** 2025-05-13 20:03:32.412960 | orchestrator | 2025-05-13 20:03:32.412970 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.412980 | orchestrator | 
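The 'Configure OVN in OVSDB' task writes per-chassis settings into the local Open vSwitch database's external_ids, where ovn-controller picks them up: each node's Geneve tunnel endpoint IP, the southbound DB endpoints on the three control nodes, and, only on the gateway chassis, ovn-bridge-mappings plus enable-chassis-as-gw. Equivalent manual commands for one node, with values taken from the log (illustration only; kolla performs this itself):

  ovs-vsctl set open_vswitch . \
      external_ids:ovn-encap-ip=192.168.16.13 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
  ovs-vsctl get open_vswitch . external_ids   # verify what ovn-controller will read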
Tuesday 13 May 2025 20:01:29 +0000 (0:00:00.197) 0:00:33.490 *********** 2025-05-13 20:03:32.412991 | orchestrator | 2025-05-13 20:03:32.413001 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-13 20:03:32.413012 | orchestrator | Tuesday 13 May 2025 20:01:29 +0000 (0:00:00.146) 0:00:33.636 *********** 2025-05-13 20:03:32.413023 | orchestrator | 2025-05-13 20:03:32.413033 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-13 20:03:32.413044 | orchestrator | Tuesday 13 May 2025 20:01:29 +0000 (0:00:00.106) 0:00:33.743 *********** 2025-05-13 20:03:32.413055 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:03:32.413065 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413076 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:03:32.413087 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:03:32.413097 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413108 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413119 | orchestrator | 2025-05-13 20:03:32.413129 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-13 20:03:32.413140 | orchestrator | Tuesday 13 May 2025 20:01:31 +0000 (0:00:01.937) 0:00:35.681 *********** 2025-05-13 20:03:32.413151 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.413162 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.413172 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:03:32.413183 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:03:32.413201 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.413211 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:03:32.413222 | orchestrator | 2025-05-13 20:03:32.413232 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-13 20:03:32.413243 | orchestrator | 2025-05-13 20:03:32.413254 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-13 20:03:32.413264 | orchestrator | Tuesday 13 May 2025 20:02:08 +0000 (0:00:36.999) 0:01:12.680 *********** 2025-05-13 20:03:32.413275 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:03:32.413286 | orchestrator | 2025-05-13 20:03:32.413296 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-13 20:03:32.413307 | orchestrator | Tuesday 13 May 2025 20:02:08 +0000 (0:00:00.509) 0:01:13.190 *********** 2025-05-13 20:03:32.413317 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:03:32.413328 | orchestrator | 2025-05-13 20:03:32.413339 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-13 20:03:32.413349 | orchestrator | Tuesday 13 May 2025 20:02:09 +0000 (0:00:00.727) 0:01:13.917 *********** 2025-05-13 20:03:32.413360 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413371 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413381 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413392 | orchestrator | 2025-05-13 20:03:32.413402 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-13 20:03:32.413413 | orchestrator | Tuesday 13 May 2025 20:02:10 +0000 (0:00:00.783) 0:01:14.701 *********** 
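lookup_cluster.yml decides between bootstrapping a fresh NB/SB cluster and joining an existing one by checking whether the OVN database container volumes already exist on each host; on this fresh testbed they do not, so every 'existing cluster' probe below is skipped and the 'new cluster' bootstrap args are used instead. The same check by hand (the NB volume name appears later in this log; the SB name is assumed symmetric):

  docker volume ls --filter name=ovn_nb_db --format '{{ .Name }}'
  docker volume ls --filter name=ovn_sb_db --format '{{ .Name }}'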
2025-05-13 20:03:32.413424 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413445 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413456 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413472 | orchestrator | 2025-05-13 20:03:32.413484 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-13 20:03:32.413494 | orchestrator | Tuesday 13 May 2025 20:02:10 +0000 (0:00:00.320) 0:01:15.022 *********** 2025-05-13 20:03:32.413505 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413516 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413527 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413537 | orchestrator | 2025-05-13 20:03:32.413548 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-13 20:03:32.413559 | orchestrator | Tuesday 13 May 2025 20:02:11 +0000 (0:00:00.363) 0:01:15.385 *********** 2025-05-13 20:03:32.413570 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413580 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413591 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413620 | orchestrator | 2025-05-13 20:03:32.413633 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-13 20:03:32.413644 | orchestrator | Tuesday 13 May 2025 20:02:11 +0000 (0:00:00.528) 0:01:15.914 *********** 2025-05-13 20:03:32.413654 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.413665 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.413676 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.413686 | orchestrator | 2025-05-13 20:03:32.413697 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-13 20:03:32.413707 | orchestrator | Tuesday 13 May 2025 20:02:11 +0000 (0:00:00.312) 0:01:16.227 *********** 2025-05-13 20:03:32.413718 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.413729 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.413740 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.413750 | orchestrator | 2025-05-13 20:03:32.413761 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-13 20:03:32.413772 | orchestrator | Tuesday 13 May 2025 20:02:12 +0000 (0:00:00.300) 0:01:16.527 *********** 2025-05-13 20:03:32.413782 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.413793 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.413804 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.413824 | orchestrator | 2025-05-13 20:03:32.413835 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-13 20:03:32.413846 | orchestrator | Tuesday 13 May 2025 20:02:12 +0000 (0:00:00.282) 0:01:16.810 *********** 2025-05-13 20:03:32.413856 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.413867 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.413877 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.413888 | orchestrator | 2025-05-13 20:03:32.413898 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-13 20:03:32.413909 | orchestrator | Tuesday 13 May 2025 20:02:13 +0000 (0:00:00.569) 0:01:17.380 *********** 2025-05-13 20:03:32.413919 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.413930 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 20:03:32.413941 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.413951 | orchestrator | 2025-05-13 20:03:32.413962 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-13 20:03:32.413972 | orchestrator | Tuesday 13 May 2025 20:02:13 +0000 (0:00:00.303) 0:01:17.683 *********** 2025-05-13 20:03:32.413983 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.413993 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414004 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414014 | orchestrator | 2025-05-13 20:03:32.414077 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-13 20:03:32.414089 | orchestrator | Tuesday 13 May 2025 20:02:13 +0000 (0:00:00.269) 0:01:17.953 *********** 2025-05-13 20:03:32.414100 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414111 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414122 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414133 | orchestrator | 2025-05-13 20:03:32.414143 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-13 20:03:32.414154 | orchestrator | Tuesday 13 May 2025 20:02:13 +0000 (0:00:00.272) 0:01:18.225 *********** 2025-05-13 20:03:32.414165 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414176 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414186 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414197 | orchestrator | 2025-05-13 20:03:32.414208 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-13 20:03:32.414219 | orchestrator | Tuesday 13 May 2025 20:02:14 +0000 (0:00:00.355) 0:01:18.581 *********** 2025-05-13 20:03:32.414229 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414240 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414251 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414262 | orchestrator | 2025-05-13 20:03:32.414272 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-13 20:03:32.414283 | orchestrator | Tuesday 13 May 2025 20:02:14 +0000 (0:00:00.279) 0:01:18.860 *********** 2025-05-13 20:03:32.414294 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414305 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414316 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414327 | orchestrator | 2025-05-13 20:03:32.414338 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-13 20:03:32.414349 | orchestrator | Tuesday 13 May 2025 20:02:14 +0000 (0:00:00.248) 0:01:19.109 *********** 2025-05-13 20:03:32.414359 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414370 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414381 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414391 | orchestrator | 2025-05-13 20:03:32.414402 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-13 20:03:32.414413 | orchestrator | Tuesday 13 May 2025 20:02:15 +0000 (0:00:00.248) 0:01:19.357 *********** 2025-05-13 20:03:32.414424 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414435 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414446 | orchestrator | 
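The skipped 'service port liveness' tasks only matter when volumes from a previous deployment are found: they probe the database ports to separate live cluster members from stale ones. With OVN's conventional ports (6641 for the northbound DB, 6642 for the southbound DB, the latter matching the ovn-remote value configured earlier), a manual probe might look like:

  nc -z -w 2 192.168.16.10 6641 && echo 'NB DB reachable'
  nc -z -w 2 192.168.16.10 6642 && echo 'SB DB reachable'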
skipping: [testbed-node-2] 2025-05-13 20:03:32.414464 | orchestrator | 2025-05-13 20:03:32.414475 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-13 20:03:32.414485 | orchestrator | Tuesday 13 May 2025 20:02:15 +0000 (0:00:00.495) 0:01:19.852 *********** 2025-05-13 20:03:32.414496 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414526 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414537 | orchestrator | 2025-05-13 20:03:32.414548 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-13 20:03:32.414559 | orchestrator | Tuesday 13 May 2025 20:02:15 +0000 (0:00:00.274) 0:01:20.127 *********** 2025-05-13 20:03:32.414570 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:03:32.414581 | orchestrator | 2025-05-13 20:03:32.414592 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-13 20:03:32.414643 | orchestrator | Tuesday 13 May 2025 20:02:16 +0000 (0:00:00.513) 0:01:20.641 *********** 2025-05-13 20:03:32.414665 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.414682 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.414701 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.414712 | orchestrator | 2025-05-13 20:03:32.414723 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-13 20:03:32.414734 | orchestrator | Tuesday 13 May 2025 20:02:17 +0000 (0:00:00.868) 0:01:21.509 *********** 2025-05-13 20:03:32.414745 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.414758 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.414776 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.414793 | orchestrator | 2025-05-13 20:03:32.414811 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-13 20:03:32.414828 | orchestrator | Tuesday 13 May 2025 20:02:17 +0000 (0:00:00.443) 0:01:21.953 *********** 2025-05-13 20:03:32.414846 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414863 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414880 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414900 | orchestrator | 2025-05-13 20:03:32.414918 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-13 20:03:32.414938 | orchestrator | Tuesday 13 May 2025 20:02:18 +0000 (0:00:00.354) 0:01:22.308 *********** 2025-05-13 20:03:32.414950 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.414961 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.414972 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.414982 | orchestrator | 2025-05-13 20:03:32.414993 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-13 20:03:32.415004 | orchestrator | Tuesday 13 May 2025 20:02:18 +0000 (0:00:00.343) 0:01:22.651 *********** 2025-05-13 20:03:32.415015 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.415026 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.415037 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.415047 | orchestrator | 2025-05-13 20:03:32.415058 | orchestrator | TASK [ovn-db : Remove an old 
node with the same ip address as the new node in SB DB] *** 2025-05-13 20:03:32.415068 | orchestrator | Tuesday 13 May 2025 20:02:18 +0000 (0:00:00.507) 0:01:23.158 *********** 2025-05-13 20:03:32.415079 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.415090 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.415100 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.415111 | orchestrator | 2025-05-13 20:03:32.415121 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-13 20:03:32.415132 | orchestrator | Tuesday 13 May 2025 20:02:19 +0000 (0:00:00.332) 0:01:23.491 *********** 2025-05-13 20:03:32.415142 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.415153 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.415164 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.415174 | orchestrator | 2025-05-13 20:03:32.415185 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-13 20:03:32.415207 | orchestrator | Tuesday 13 May 2025 20:02:19 +0000 (0:00:00.385) 0:01:23.877 *********** 2025-05-13 20:03:32.415217 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.415228 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.415239 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.415250 | orchestrator | 2025-05-13 20:03:32.415261 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-13 20:03:32.415272 | orchestrator | Tuesday 13 May 2025 20:02:20 +0000 (0:00:00.539) 0:01:24.417 *********** 2025-05-13 20:03:32.415284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415519 | orchestrator | 2025-05-13 20:03:32.415530 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-13 20:03:32.415541 | orchestrator | Tuesday 13 May 2025 20:02:21 +0000 (0:00:01.736) 0:01:26.153 *********** 2025-05-13 20:03:32.415553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415703 | orchestrator | 2025-05-13 20:03:32.415713 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-13 20:03:32.415724 | orchestrator | Tuesday 13 May 2025 20:02:25 +0000 (0:00:03.705) 0:01:29.859 *********** 2025-05-13 20:03:32.415735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.415856 | orchestrator | 2025-05-13 20:03:32.415866 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 20:03:32.415877 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:02.082) 0:01:31.941 *********** 2025-05-13 20:03:32.415888 | orchestrator | 2025-05-13 20:03:32.415899 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 
20:03:32.415910 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:00.069) 0:01:32.011 *********** 2025-05-13 20:03:32.415921 | orchestrator | 2025-05-13 20:03:32.415932 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 20:03:32.415942 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:00.072) 0:01:32.083 *********** 2025-05-13 20:03:32.415953 | orchestrator | 2025-05-13 20:03:32.415964 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-13 20:03:32.415975 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:00.072) 0:01:32.156 *********** 2025-05-13 20:03:32.415986 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.415997 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.416008 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.416019 | orchestrator | 2025-05-13 20:03:32.416029 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-13 20:03:32.416040 | orchestrator | Tuesday 13 May 2025 20:02:36 +0000 (0:00:08.459) 0:01:40.616 *********** 2025-05-13 20:03:32.416055 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.416074 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.416103 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.416122 | orchestrator | 2025-05-13 20:03:32.416140 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-13 20:03:32.416158 | orchestrator | Tuesday 13 May 2025 20:02:43 +0000 (0:00:07.460) 0:01:48.076 *********** 2025-05-13 20:03:32.416175 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.416195 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.416213 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.416232 | orchestrator | 2025-05-13 20:03:32.416246 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-13 20:03:32.416257 | orchestrator | Tuesday 13 May 2025 20:02:51 +0000 (0:00:07.629) 0:01:55.706 *********** 2025-05-13 20:03:32.416267 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.416278 | orchestrator | 2025-05-13 20:03:32.416289 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-13 20:03:32.416300 | orchestrator | Tuesday 13 May 2025 20:02:51 +0000 (0:00:00.120) 0:01:55.827 *********** 2025-05-13 20:03:32.416311 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.416321 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.416332 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.416343 | orchestrator | 2025-05-13 20:03:32.416354 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-13 20:03:32.416365 | orchestrator | Tuesday 13 May 2025 20:02:52 +0000 (0:00:00.784) 0:01:56.612 *********** 2025-05-13 20:03:32.416375 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.416386 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.416397 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.416407 | orchestrator | 2025-05-13 20:03:32.416418 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-13 20:03:32.416429 | orchestrator | Tuesday 13 May 2025 20:02:53 +0000 (0:00:00.973) 0:01:57.586 *********** 2025-05-13 20:03:32.416440 | orchestrator | ok: 
[testbed-node-0] 2025-05-13 20:03:32.416450 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.416461 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.416472 | orchestrator | 2025-05-13 20:03:32.416483 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-13 20:03:32.416494 | orchestrator | Tuesday 13 May 2025 20:02:54 +0000 (0:00:00.805) 0:01:58.391 *********** 2025-05-13 20:03:32.416515 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.416526 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:03:32.416537 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:03:32.416548 | orchestrator | 2025-05-13 20:03:32.416558 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-13 20:03:32.416569 | orchestrator | Tuesday 13 May 2025 20:02:54 +0000 (0:00:00.623) 0:01:59.014 *********** 2025-05-13 20:03:32.416586 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.416597 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.416650 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.416662 | orchestrator | 2025-05-13 20:03:32.416673 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-13 20:03:32.416683 | orchestrator | Tuesday 13 May 2025 20:02:55 +0000 (0:00:00.833) 0:01:59.848 *********** 2025-05-13 20:03:32.416694 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.416705 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.416715 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.416726 | orchestrator | 2025-05-13 20:03:32.416737 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-13 20:03:32.416747 | orchestrator | Tuesday 13 May 2025 20:02:56 +0000 (0:00:01.231) 0:02:01.079 *********** 2025-05-13 20:03:32.416758 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.416769 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.416779 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.416790 | orchestrator | 2025-05-13 20:03:32.416801 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-13 20:03:32.416811 | orchestrator | Tuesday 13 May 2025 20:02:57 +0000 (0:00:00.299) 0:02:01.379 *********** 2025-05-13 20:03:32.416823 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416834 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416846 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-13 20:03:32.416857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416870 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416911 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416935 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416947 | orchestrator | 2025-05-13 20:03:32.416957 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-13 20:03:32.416968 | orchestrator | Tuesday 13 May 2025 20:02:58 +0000 (0:00:01.394) 0:02:02.774 *********** 2025-05-13 20:03:32.416980 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.416991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417015 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417056 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417091 | orchestrator | 2025-05-13 20:03:32.417102 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-13 20:03:32.417113 | orchestrator | Tuesday 13 May 2025 20:03:02 +0000 (0:00:03.831) 0:02:06.605 *********** 2025-05-13 20:03:32.417135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417147 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417159 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:03:32.417282 | orchestrator | 2025-05-13 20:03:32.417300 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 20:03:32.417317 | orchestrator | Tuesday 13 May 2025 20:03:05 +0000 (0:00:03.064) 0:02:09.670 *********** 2025-05-13 20:03:32.417335 | orchestrator | 2025-05-13 20:03:32.417354 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 20:03:32.417372 | orchestrator | Tuesday 13 May 2025 20:03:05 +0000 (0:00:00.071) 0:02:09.742 *********** 2025-05-13 20:03:32.417391 | orchestrator | 2025-05-13 20:03:32.417403 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-13 20:03:32.417413 | orchestrator | Tuesday 13 May 2025 20:03:05 +0000 (0:00:00.065) 0:02:09.807 *********** 2025-05-13 20:03:32.417424 | orchestrator | 2025-05-13 20:03:32.417435 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-13 20:03:32.417445 | orchestrator | Tuesday 13 May 2025 20:03:05 +0000 (0:00:00.065) 0:02:09.873 *********** 2025-05-13 20:03:32.417456 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.417477 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.417488 | orchestrator | 2025-05-13 20:03:32.417507 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-13 20:03:32.417519 | orchestrator | Tuesday 13 May 2025 20:03:12 +0000 (0:00:06.416) 0:02:16.289 *********** 2025-05-13 20:03:32.417529 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.417540 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.417551 | orchestrator | 2025-05-13 20:03:32.417561 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-13 20:03:32.417572 | orchestrator | Tuesday 13 May 2025 20:03:18 +0000 (0:00:06.103) 0:02:22.393 *********** 2025-05-13 20:03:32.417583 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:03:32.417594 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:03:32.417688 | orchestrator | 2025-05-13 20:03:32.417701 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-13 20:03:32.417712 | orchestrator | Tuesday 13 May 2025 20:03:24 +0000 (0:00:06.116) 0:02:28.510 *********** 2025-05-13 20:03:32.417722 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:03:32.417733 | orchestrator | 2025-05-13 20:03:32.417744 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-13 20:03:32.417755 | orchestrator | Tuesday 13 May 2025 20:03:24 +0000 (0:00:00.172) 0:02:28.682 *********** 2025-05-13 20:03:32.417765 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:03:32.417776 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:03:32.417787 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:03:32.417798 | orchestrator | 2025-05-13 20:03:32.417808 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-13 20:03:32.417829 | orchestrator | Tuesday 13 May 2025 20:03:25 +0000 (0:00:01.071) 0:02:29.754 *********** 2025-05-13 20:03:32.417840 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:03:32.417851 | 
orchestrator | skipping: [testbed-node-2]
2025-05-13 20:03:32.417862 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:03:32.417873 | orchestrator |
2025-05-13 20:03:32.417883 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-13 20:03:32.417894 | orchestrator | Tuesday 13 May 2025 20:03:26 +0000 (0:00:00.645) 0:02:30.399 ***********
2025-05-13 20:03:32.417905 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:03:32.417915 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:03:32.417932 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:03:32.417951 | orchestrator |
2025-05-13 20:03:32.417970 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-13 20:03:32.417988 | orchestrator | Tuesday 13 May 2025 20:03:26 +0000 (0:00:00.817) 0:02:31.217 ***********
2025-05-13 20:03:32.418005 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:03:32.418093 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:03:32.418113 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:03:32.418131 | orchestrator |
2025-05-13 20:03:32.418140 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-13 20:03:32.418147 | orchestrator | Tuesday 13 May 2025 20:03:27 +0000 (0:00:00.724) 0:02:31.941 ***********
2025-05-13 20:03:32.418155 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:03:32.418163 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:03:32.418171 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:03:32.418179 | orchestrator |
2025-05-13 20:03:32.418186 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-13 20:03:32.418194 | orchestrator | Tuesday 13 May 2025 20:03:28 +0000 (0:00:01.024) 0:02:32.966 ***********
2025-05-13 20:03:32.418202 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:03:32.418210 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:03:32.418218 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:03:32.418225 | orchestrator |
2025-05-13 20:03:32.418233 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:03:32.418241 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-13 20:03:32.418250 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-13 20:03:32.418258 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-13 20:03:32.418266 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:03:32.418274 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:03:32.418282 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:03:32.418289 | orchestrator |
2025-05-13 20:03:32.418297 | orchestrator |
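[editor's note on the ovn-db play that finished above: "Configure OVN NB/SB connection settings" reports changed on testbed-node-0 and skipping on the other nodes because clustered OVSDB applies such writes through the Raft leader. A hedged shell sketch of inspecting this by hand; the container name matches the kolla image names in this log, but the ctl socket path, probe value, and port are assumptions, not commands taken from the log:

    # Report the Raft role of the local NB database server ("Role: leader" on one node):
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    # Roughly what the "connection settings" task does on the leader (see ovn-nbctl(8)):
    docker exec ovn_nb_db ovn-nbctl --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
]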
2025-05-13 20:03:32.418306 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:03:32.418314 | orchestrator | Tuesday 13 May 2025 20:03:29 +0000 (0:00:00.839) 0:02:33.806 ***********
2025-05-13 20:03:32.418321 | orchestrator | ===============================================================================
2025-05-13 20:03:32.418329 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 37.00s
2025-05-13 20:03:32.418337 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.08s
2025-05-13 20:03:32.418345 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.88s
2025-05-13 20:03:32.418353 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.75s
2025-05-13 20:03:32.418368 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.56s
2025-05-13 20:03:32.418376 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s
2025-05-13 20:03:32.418389 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.71s
2025-05-13 20:03:32.418405 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.06s
2025-05-13 20:03:32.418413 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.64s
2025-05-13 20:03:32.418420 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.36s
2025-05-13 20:03:32.418428 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s
2025-05-13 20:03:32.418436 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.05s
2025-05-13 20:03:32.418443 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s
2025-05-13 20:03:32.418451 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.86s
2025-05-13 20:03:32.418459 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.74s
2025-05-13 20:03:32.418466 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.62s
2025-05-13 20:03:32.418474 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s
2025-05-13 20:03:32.418481 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.23s
2025-05-13 20:03:32.418489 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.16s
2025-05-13 20:03:32.418497 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.07s
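[editor's note: the per-task timing table above is profile_tasks-style Ansible callback output. The same summary can be produced for any playbook run by switching the callback on, e.g. (a standard Ansible mechanism assumed here, not shown in this log):

    # Requires the ansible.posix collection; prints per-task durations and a recap like the one above.
    export ANSIBLE_CALLBACKS_ENABLED=ansible.posix.profile_tasks
]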
2025-05-13 20:03:32.418505 | orchestrator | 2025-05-13 20:03:32 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:03:32.418513 | orchestrator | 2025-05-13 20:03:32 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:03:32.418521 | orchestrator | 2025-05-13 20:03:32 | INFO  | Wait 1 second(s) until the next check
[log condensed: the same two "is in state STARTED" notices and the one-second wait message repeat every ~3 seconds from 20:03:35 through 20:05:03]
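[editor's note: the repeating notices above are the deployment wrapper polling its two background task IDs until they leave the STARTED state; the transition to SUCCESS appears below at 20:06:05. The pattern is a plain poll-and-sleep loop. A minimal shell sketch, where check_state is a hypothetical placeholder for whatever returns a task's state, not an OSISM command:

    TASK_ID=50c61596-ef47-4202-962e-5d0b51567576
    # check_state is hypothetical; substitute the real status query of your task runner.
    while [ "$(check_state "$TASK_ID")" = "STARTED" ]; do
        sleep 1   # mirrors the "Wait 1 second(s) until the next check" notices
    done
]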
[log condensed: the identical STARTED/wait polling continues every ~3 seconds through 20:05:58]
2025-05-13 20:06:01.947541 | orchestrator | 2025-05-13 20:06:01 | INFO  | Task
50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:06:01.947936 | orchestrator | 2025-05-13 20:06:01 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state STARTED
2025-05-13 20:06:01.948132 | orchestrator | 2025-05-13 20:06:01 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:06:04.999299 | orchestrator | 2025-05-13 20:06:04 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED
2025-05-13 20:06:05.000435 | orchestrator | 2025-05-13 20:06:04 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED
2025-05-13 20:06:05.001501 | orchestrator | 2025-05-13 20:06:05 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED
2025-05-13 20:06:05.009379 | orchestrator | 2025-05-13 20:06:05 | INFO  | Task 2e907683-bfd0-484b-b020-eb677e5887f1 is in state SUCCESS
[… buffered Ansible output of the finished task follows, flushed at 20:06:05; stream timestamps dropped and repeated kolla service-definition payloads condensed as {'key': …} below …]
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.769) 0:00:00.769 ***********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.598) 0:00:01.368 ***********
orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
orchestrator |
orchestrator | PLAY [Apply role loadbalancer] *************************************************
orchestrator |
orchestrator | TASK [loadbalancer : include_tasks] ********************************************
orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.848) 0:00:02.217 ***********
orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
orchestrator | Tuesday 13 May 2025 19:59:43 +0000 (0:00:01.216) 0:00:03.434 ***********
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Setting sysctl values] ***************************************************
orchestrator | Tuesday 13 May 2025 19:59:44 +0000 (0:00:00.938) 0:00:04.372 ***********
orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
orchestrator | Tuesday 13 May 2025 19:59:45 +0000 (0:00:01.033) 0:00:05.406 ***********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
orchestrator | Tuesday 13 May 2025 19:59:47 +0000 (0:00:01.637) 0:00:07.043 ***********
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator |
orchestrator | TASK [module-load : Load modules] **********************************************
orchestrator | Tuesday 13 May 2025 19:59:50 +0000 (0:00:02.724) 0:00:09.767 ***********
orchestrator | changed: [testbed-node-0] => (item=ip_vs)
orchestrator | changed: [testbed-node-1] => (item=ip_vs)
orchestrator | changed: [testbed-node-2] => (item=ip_vs)
orchestrator |
orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
orchestrator | Tuesday 13 May 2025 19:59:51 +0000 (0:00:00.909) 0:00:10.677 ***********
orchestrator | changed: [testbed-node-0] => (item=ip_vs)
orchestrator | changed: [testbed-node-1] => (item=ip_vs)
orchestrator | changed: [testbed-node-2] => (item=ip_vs)
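[annotation] The ip_nonlocal_bind sysctls above let HAProxy bind to the keepalived VIP even on nodes that do not currently hold that address, and the ip_vs module backs keepalived's virtual-server handling. A small Linux-only sketch of the same effect at socket level (203.0.113.10 is a placeholder documentation address assumed not to be configured on this host; IP_FREEBIND would be the per-socket equivalent of the sysctl):

import errno
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Succeeds only if the address is local or net.ipv4.ip_nonlocal_bind=1.
    s.bind(("203.0.113.10", 0))
    print("bind succeeded: nonlocal bind is allowed")
except OSError as e:
    if e.errno == errno.EADDRNOTAVAIL:
        print("bind refused: address not local and ip_nonlocal_bind=0")
    else:
        raise
finally:
    s.close()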
orchestrator | TASK [module-load : Drop module persistence] ***********************************
orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:01.923) 0:00:12.601 ***********
orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:00.808) 0:00:13.410 ***********
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator |
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
orchestrator | Tuesday 13 May 2025 19:59:56 +0000 (0:00:02.469) 0:00:15.879 ***********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
orchestrator | Tuesday 13 May 2025 19:59:57 +0000 (0:00:01.230) 0:00:17.110 ***********
orchestrator | changed: [testbed-node-0] => (item=users)
orchestrator | changed: [testbed-node-1] => (item=users)
orchestrator | changed: [testbed-node-2] => (item=users)
orchestrator | changed: [testbed-node-0] => (item=rules)
orchestrator | changed: [testbed-node-1] => (item=rules)
orchestrator | changed: [testbed-node-2] => (item=rules)
orchestrator |
orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
orchestrator | Tuesday 13 May 2025 20:00:00 +0000 (0:00:03.000) 0:00:20.110 ***********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
orchestrator | Tuesday 13 May 2025 20:00:02 +0000 (0:00:01.743) 0:00:21.854 ***********
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
orchestrator | Tuesday 13 May 2025 20:00:04 +0000 (0:00:02.093) 0:00:23.948 ***********
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', …})
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', …})
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', …})
orchestrator | skipping: [testbed-node-2]
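[annotation] Each enabled service's definition carries a healthcheck command that the container runtime executes periodically: healthcheck_curl http://<api_ip>:61313 for haproxy, healthcheck_listen proxysql 6032 for proxysql. A rough Python sketch of what a healthcheck_curl-style probe does; this is not kolla's actual script (kolla ships shell wrappers), just the same contract of exit code 0 for healthy, 1 for unhealthy:

import sys
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
    """Return 0 (healthy) when the endpoint answers with HTTP < 400, else 1."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status < 400 else 1
    except Exception:
        # Connection refused, timeout, or HTTP error all count as unhealthy.
        return 1

if __name__ == "__main__":
    sys.exit(healthcheck_curl(sys.argv[1]))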
orchestrator |
orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
orchestrator | Tuesday 13 May 2025 20:00:06 +0000 (0:00:01.910) 0:00:25.858 ***********
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', …})
orchestrator |
orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
orchestrator | Tuesday 13 May 2025 20:00:10 +0000 (0:00:03.834) 0:00:29.693 ***********
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator |
orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
orchestrator | Tuesday 13 May 2025 20:00:13 +0000 (0:00:03.438) 0:00:33.131 ***********
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
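[annotation] The haproxy.cfg files above are rendered from the Jinja2 template haproxy_main.cfg.j2 with per-node variables, which is why each node ends up with a slightly different result (for example its own API interface address, visible as 192.168.16.10/.11/.12 in the healthcheck URLs). A minimal sketch of that render step; the template fragment and variable name are illustrative, not the real template:

from jinja2 import Template

# Hypothetical fragment; the real haproxy_main.cfg.j2 is far more involved.
TEMPLATE = Template("""\
frontend status
    bind {{ api_interface_address }}:61313
    mode http
""")

for node, address in {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}.items():
    print(f"# {node}")
    print(TEMPLATE.render(api_interface_address=address))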
orchestrator |
orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
orchestrator | Tuesday 13 May 2025 20:00:15 +0000 (0:00:01.889) 0:00:35.021 ***********
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
orchestrator |
orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
orchestrator | Tuesday 13 May 2025 20:00:22 +0000 (0:00:06.542) 0:00:41.563 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
orchestrator | Tuesday 13 May 2025 20:00:22 +0000 (0:00:00.660) 0:00:42.224 ***********
orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
orchestrator |
orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
orchestrator | Tuesday 13 May 2025 20:00:25 +0000 (0:00:02.471) 0:00:44.696 ***********
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
orchestrator |
orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
orchestrator | Tuesday 13 May 2025 20:00:27 +0000 (0:00:01.993) 0:00:46.690 ***********
orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
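[annotation] haproxy.pem and haproxy-internal.pem (next task) are the TLS bundles HAProxy loads for the external and internal VIPs; HAProxy expects the server certificate and private key concatenated into a single PEM file. A hedged sketch of assembling and sanity-checking such a bundle; the input file names are placeholders, not the deployment's actual paths:

import ssl

# Placeholder inputs: a server certificate (plus chain) and its private key.
with open("haproxy.pem", "w") as bundle:
    for part in ("server-cert.pem", "server-key.pem"):
        with open(part) as f:
            bundle.write(f.read())

# load_cert_chain() raises ssl.SSLError if the certificate and key do not
# match, which makes it a cheap sanity check for the assembled bundle.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("haproxy.pem")
print("haproxy.pem: certificate and private key match")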
orchestrator |
orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
orchestrator | Tuesday 13 May 2025 20:00:29 +0000 (0:00:01.811) 0:00:48.501 ***********
orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
orchestrator |
orchestrator | TASK [loadbalancer : include_tasks] ********************************************
orchestrator | Tuesday 13 May 2025 20:00:30 +0000 (0:00:01.636) 0:00:50.137 ***********
orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
orchestrator | Tuesday 13 May 2025 20:00:31 +0000 (0:00:01.150) 0:00:51.288 ***********
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator |
orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
orchestrator | Tuesday 13 May 2025 20:00:35 +0000 (0:00:03.385) 0:00:54.673 ***********
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', …})
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', …})
orchestrator | skipping: [testbed-node-1]
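[annotation] The extra-CA task above distributes additional trust anchors into each service's config directory, while the backend internal TLS certificate/key tasks are skipped, presumably because backend TLS is not enabled for this testbed configuration. A stdlib-only sketch for counting the certificates that end up in such a CA bundle; the path is a placeholder:

import re

# Placeholder path: wherever the deployment drops its combined CA bundle.
with open("ca-certificates.crt") as f:
    pem = f.read()

# Each PEM-encoded certificate sits between BEGIN/END CERTIFICATE markers.
certs = re.findall(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    pem,
    flags=re.DOTALL,
)
print(f"bundle contains {len(certs)} certificate(s)")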
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.014872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.014880 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.014888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.014902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.014910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.014918 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.014927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.014935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.014948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-13 20:06:05.014962 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.014970 | orchestrator |
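Every item in the service-cert-copy loops above is skipped, which is consistent with backend TLS being disabled for this testbed deployment: the role only copies per-service TLS material when the deployment enables it. A minimal Python sketch of the per-item decision follows. It is schematic only; kolla-ansible expresses this as 'when' conditions on the loop, and the names backend_tls_enabled and should_copy_backend_cert are illustrative assumptions, not role code.

def should_copy_backend_cert(service: dict, backend_tls_enabled: bool) -> bool:
    # A loop item is skipped unless the service is enabled and the
    # deployment actually uses backend TLS (assumed condition).
    return backend_tls_enabled and bool(service.get("enabled"))

# Trimmed to the fields visible in the log entries above.
services = {
    "haproxy": {"enabled": True},
    "proxysql": {"enabled": True},
    "keepalived": {"enabled": True},
}

for name, svc in services.items():
    if not should_copy_backend_cert(svc, backend_tls_enabled=False):
        print(f"skipping: {name}")  # mirrors the skipped items logged above

With backend TLS off, every service falls through to the skip branch, matching the skips recorded for all three nodes in each of these tasks.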
2025-05-13 20:06:05.014978 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-05-13 20:06:05.014986 | orchestrator | Tuesday 13 May 2025 20:00:37 +0000 (0:00:01.388) 0:00:57.043 ***********
2025-05-13 20:06:05.014994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.015007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.015015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-13 20:06:05.015023 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.015031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.015040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.015052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-13 20:06:05.015066 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.015074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.015082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.015095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015103 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.015111 | orchestrator | 2025-05-13 20:06:05.015119 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-13 20:06:05.015127 | orchestrator | Tuesday 13 May 2025 20:00:38 +0000 (0:00:00.577) 0:00:57.620 *********** 2025-05-13 20:06:05.015135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015166 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.015177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015203 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.015215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015246 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.015254 | orchestrator | 2025-05-13 20:06:05.015262 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-13 20:06:05.015270 | orchestrator | Tuesday 13 May 2025 20:00:38 +0000 (0:00:00.776) 0:00:58.396 *********** 2025-05-13 20:06:05.015283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015308 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.015321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015354 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.015362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015392 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.015399 | orchestrator | 2025-05-13 20:06:05.015407 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-05-13 20:06:05.015415 | orchestrator | Tuesday 13 May 2025 20:00:40 +0000 (0:00:01.229) 0:00:59.626 *********** 2025-05-13 20:06:05.015423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015458 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.015479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015509 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.015517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015554 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.015562 | orchestrator | 2025-05-13 20:06:05.015570 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-05-13 20:06:05.015578 | orchestrator | Tuesday 13 May 2025 20:00:40 +0000 (0:00:00.597) 0:01:00.223 *********** 2025-05-13 20:06:05.015586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015615 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.015623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015660 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.015668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015693 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.015701 | orchestrator | 2025-05-13 20:06:05.015713 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-05-13 20:06:05.015722 | orchestrator | Tuesday 13 May 2025 20:00:41 +0000 (0:00:00.834) 0:01:01.057 *********** 2025-05-13 20:06:05.015730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-13 20:06:05.015747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-13 20:06:05.015760 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.015773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-13 20:06:05.015782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.015790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-13 20:06:05.015798 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.015811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.015820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-13 20:06:05.015828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-13 20:06:05.015836 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.015849 | orchestrator |
2025-05-13 20:06:05.015857 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-13 20:06:05.015865 | orchestrator | Tuesday 13 May 2025 20:00:43 +0000 (0:00:01.995) 0:01:03.053 ***********
2025-05-13 20:06:05.015873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 20:06:05.015881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 20:06:05.015894 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-13 20:06:05.015902 | orchestrator |
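The (item={'key': ..., 'value': ...}) shape printed for the service loops in this log, for example in the service-cert-copy tasks above and the "Check loadbalancer containers" task below, matches Ansible's dict2items filter, which converts a role's service map into a list of key/value pairs before looping. The sketch below reproduces that shape in Python; the SERVICES map is trimmed to fields actually visible in this log and is not the full kolla-ansible definition.

SERVICES = {
    "haproxy": {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/haproxy:2024.2",
        "privileged": True,
    },
    "proxysql": {
        "container_name": "proxysql",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/proxysql:2024.2",
        "privileged": False,
    },
    "keepalived": {
        "container_name": "keepalived",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/keepalived:2024.2",
        "privileged": True,
    },
}

def dict2items(d: dict) -> list[dict]:
    # Same transformation as Ansible's dict2items filter.
    return [{"key": k, "value": v} for k, v in d.items()]

for item in dict2items(SERVICES):
    print(f"(item={item!r})")

Running this prints one (item=...) line per service, in the same key/value form the tasks here log for haproxy, proxysql and keepalived.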
2025-05-13 20:06:05.015910 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-13 20:06:05.015918 | orchestrator | Tuesday 13 May 2025 20:00:45 +0000 (0:00:02.187) 0:01:05.241 ***********
2025-05-13 20:06:05.015926 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 20:06:05.015933 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 20:06:05.015941 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-13 20:06:05.015949 | orchestrator |
2025-05-13 20:06:05.015957 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-13 20:06:05.015965 | orchestrator | Tuesday 13 May 2025 20:00:47 +0000 (0:00:01.397) 0:01:06.639 ***********
2025-05-13 20:06:05.015973 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 20:06:05.015981 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 20:06:05.015988 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-13 20:06:05.015996 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 20:06:05.016004 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.016012 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 20:06:05.016020 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.016028 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-13 20:06:05.016036 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.016044 | orchestrator |
2025-05-13 20:06:05.016052 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-13 20:06:05.016059 | orchestrator | Tuesday 13 May 2025 20:00:48 +0000 (0:00:01.017) 0:01:07.656 ***********
2025-05-13 20:06:05.016072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-13 20:06:05.016080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-13 20:06:05.016094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-13 20:06:05.016106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 20:06:05.016115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 20:06:05.016123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-13 20:06:05.016132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 20:06:05.016140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 20:06:05.016148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-13 20:06:05.016161 | orchestrator | 2025-05-13 20:06:05.016170 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-13 20:06:05.016178 | orchestrator | Tuesday 13 May 2025 20:00:50 +0000 (0:00:02.649) 0:01:10.306 *********** 2025-05-13 20:06:05.016186 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.016194 | orchestrator | 2025-05-13 20:06:05.016202 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-13 20:06:05.016209 | orchestrator | Tuesday 13 May 2025 20:00:51 +0000 (0:00:00.881) 0:01:11.188 *********** 2025-05-13 20:06:05.016218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 20:06:05.016233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.016242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 20:06:05.016250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.016309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.016331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.016339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-13 20:06:05.017315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.017329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017352 | orchestrator | 2025-05-13 20:06:05.017359 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-13 20:06:05.017367 | orchestrator | Tuesday 13 May 2025 20:00:55 +0000 (0:00:03.772) 0:01:14.961 *********** 2025-05-13 20:06:05.017374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 20:06:05.017389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.017396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017410 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.017421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 20:06:05.017433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.017440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-13 20:06:05.017484 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.017492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.017498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017521 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.017528 | orchestrator | 2025-05-13 20:06:05.017534 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-13 20:06:05.017541 | orchestrator | Tuesday 13 May 2025 20:00:56 +0000 (0:00:00.776) 0:01:15.737 *********** 2025-05-13 20:06:05.017548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017565 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.017572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-13 20:06:05.017604 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.017612 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.017623 | orchestrator | 2025-05-13 20:06:05.017640 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-13 20:06:05.017651 | orchestrator | Tuesday 13 May 2025 20:00:57 +0000 (0:00:01.136) 0:01:16.874 *********** 2025-05-13 20:06:05.017662 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.017672 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.017683 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.017693 | orchestrator | 2025-05-13 20:06:05.017702 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-13 20:06:05.017712 | orchestrator | Tuesday 13 May 2025 20:00:58 +0000 (0:00:01.321) 0:01:18.196 *********** 2025-05-13 20:06:05.017722 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.017732 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.017742 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.017799 | orchestrator |
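The aodh block above shows the pattern that repeats for every service in this play: the haproxy-config role loops over the project's service map and templates a load-balancer stanza only for entries that are enabled and define a haproxy key (the evaluator/listener/notifier items carry none, hence the skips), while proxysql-config lays down per-service database users and routing rules. A minimal sketch of that loop follows; the variable and file names are assumptions for illustration, not a verbatim excerpt of the kolla-ansible role:

    # Sketch of the per-service loop behind the "changed"/"skipping" pairs above.
    # project_services, node_config_directory and the template path are assumed
    # names, not copied from kolla-ansible.
    - name: "Copying over {{ project_name }} haproxy config"
      vars:
        service: "{{ item.value }}"
      template:
        src: "haproxy.cfg.j2"
        dest: "{{ node_config_directory }}/haproxy/services.d/{{ item.key }}.cfg"
        mode: "0660"
      become: true
      with_dict: "{{ project_services }}"
      when:
        - service.enabled | bool
        - service.haproxy is defined  # absent for aodh-evaluator/-listener/-notifier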
2025-05-13 20:06:05.017813 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-13 20:06:05.017825 | orchestrator | Tuesday 13 May 2025 20:01:00 +0000 (0:00:02.162) 0:01:20.358 *********** 2025-05-13 20:06:05.017833 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.017846 | orchestrator | 2025-05-13 20:06:05.017853 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-13 20:06:05.017860 | orchestrator | Tuesday 13 May 2025 20:01:01 +0000 (0:00:00.752) 0:01:21.111 *********** 2025-05-13 20:06:05.017915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.017929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.017939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes':
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.017992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.018058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018077 | orchestrator | 2025-05-13 20:06:05.018085 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-13 20:06:05.018093 | orchestrator | Tuesday 13 May 2025 20:01:07 +0000 (0:00:05.928) 0:01:27.039 *********** 2025-05-13 20:06:05.018107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.018121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018137 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.018158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018174 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.018218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018232 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018239 | orchestrator | 2025-05-13 20:06:05.018245 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-13 20:06:05.018252 | orchestrator | Tuesday 13 May 2025 20:01:08 +0000 (0:00:00.568) 0:01:27.608 *********** 2025-05-13 20:06:05.018262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018277 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018297 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-13 20:06:05.018317 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018324 | orchestrator | 2025-05-13 20:06:05.018331 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-13 20:06:05.018345 | orchestrator | Tuesday 13 May 2025 20:01:08 +0000 (0:00:00.851) 0:01:28.459 *********** 2025-05-13 20:06:05.018352 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.018359 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.018365 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.018372 | orchestrator | 2025-05-13 20:06:05.018378 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-13 20:06:05.018385 | orchestrator | Tuesday 13 May 2025 20:01:10 +0000 (0:00:01.606) 0:01:30.065 *********** 2025-05-13 20:06:05.018391 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.018398 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.018405 | orchestrator | 
changed: [testbed-node-2] 2025-05-13 20:06:05.018411 | orchestrator | 2025-05-13 20:06:05.018422 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-13 20:06:05.018429 | orchestrator | Tuesday 13 May 2025 20:01:12 +0000 (0:00:02.271) 0:01:32.337 *********** 2025-05-13 20:06:05.018436 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018442 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018449 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018455 | orchestrator | 2025-05-13 20:06:05.018481 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-13 20:06:05.018493 | orchestrator | Tuesday 13 May 2025 20:01:13 +0000 (0:00:00.353) 0:01:32.690 *********** 2025-05-13 20:06:05.018500 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.018507 | orchestrator | 2025-05-13 20:06:05.018514 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-13 20:06:05.018520 | orchestrator | Tuesday 13 May 2025 20:01:13 +0000 (0:00:00.646) 0:01:33.337 *********** 2025-05-13 20:06:05.018527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-13 20:06:05.018539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-13 20:06:05.018546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-13 20:06:05.018557 | orchestrator | 2025-05-13 20:06:05.018564 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-13 20:06:05.018570 | orchestrator | Tuesday 13 May 2025 20:01:17 +0000 (0:00:03.384) 0:01:36.721 *********** 2025-05-13 20:06:05.018590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 20:06:05.018597 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 20:06:05.018611 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-13 20:06:05.018625 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018631 | orchestrator | 2025-05-13 20:06:05.018638 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 
2025-05-13 20:06:05.018645 | orchestrator | Tuesday 13 May 2025 20:01:19 +0000 (0:00:02.348) 0:01:39.070 *********** 2025-05-13 20:06:05.018656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018678 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018699 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-13 20:06:05.018724 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018731 | orchestrator | 2025-05-13 20:06:05.018738 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-13 20:06:05.018744 | orchestrator | Tuesday 13 May 2025 20:01:21 +0000 (0:00:02.268) 0:01:41.339 *********** 2025-05-13 20:06:05.018751 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018758 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018765 | orchestrator | skipping: [testbed-node-2] 
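Two things stand out in the ceph-rgw block above. First, the RGW backends are not kolla containers on the control nodes, so the service definition carries ready-made haproxy server lines via custom_member_list: the frontend listens on 6780 while the members point at testbed-node-3..5 on 8081. Second, the ProxySQL users task is skipped because ceph-rgw brings no MariaDB schema or user. Rendered as YAML, the service definition visible in the logged items looks like this (only the surrounding layout is reconstructed; all values are taken from the log):

    # YAML rendering of the ceph-rgw service entry from the task items above.
    ceph-rgw:
      group: all
      enabled: true
      haproxy:
        radosgw:
          enabled: true
          mode: http
          external: false
          port: "6780"
          custom_member_list:
            - server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
            - server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
            - server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5
        radosgw_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "6780"
          custom_member_list:
            - server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5
            - server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5
            - server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5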
2025-05-13 20:06:05.018771 | orchestrator | 2025-05-13 20:06:05.018778 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-13 20:06:05.018785 | orchestrator | Tuesday 13 May 2025 20:01:22 +0000 (0:00:00.697) 0:01:42.036 *********** 2025-05-13 20:06:05.018791 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.018798 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.018804 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.018841 | orchestrator | 2025-05-13 20:06:05.018849 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-13 20:06:05.018856 | orchestrator | Tuesday 13 May 2025 20:01:23 +0000 (0:00:01.004) 0:01:43.040 *********** 2025-05-13 20:06:05.018863 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.018869 | orchestrator | 2025-05-13 20:06:05.018893 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-13 20:06:05.018900 | orchestrator | Tuesday 13 May 2025 20:01:24 +0000 (0:00:00.792) 0:01:43.833 *********** 2025-05-13 20:06:05.018910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.018923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.018987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.018999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.019051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019080 | orchestrator | 2025-05-13 20:06:05.019086 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-13 20:06:05.019093 | orchestrator | Tuesday 13 May 2025 20:01:28 +0000 (0:00:04.249) 0:01:48.083 *********** 2025-05-13 20:06:05.019100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.019108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 
20:06:05.019138 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.019148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.019155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.019174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019199 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.019209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019223 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.019230 | orchestrator | 2025-05-13 20:06:05.019237 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-13 20:06:05.019244 | orchestrator | Tuesday 13 May 2025 20:01:30 +0000 (0:00:01.560) 0:01:49.643 *********** 2025-05-13 20:06:05.019251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019262 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019269 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.019276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019297 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.019304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-13 20:06:05.019318 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.019325 | orchestrator | 2025-05-13 20:06:05.019331 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-13 20:06:05.019338 | orchestrator | Tuesday 13 May 2025 20:01:31 +0000 (0:00:01.014) 0:01:50.657 *********** 2025-05-13 20:06:05.019344 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.019351 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.019357 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.019364 | orchestrator | 2025-05-13 20:06:05.019371 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-13 20:06:05.019377 | orchestrator | Tuesday 13 May 2025 20:01:33 +0000 (0:00:02.141) 0:01:52.799 *********** 2025-05-13 20:06:05.019384 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.019390 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.019397 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.019403 | orchestrator | 2025-05-13 20:06:05.019410 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-13 20:06:05.019416 | orchestrator | Tuesday 13 May 2025 20:01:35 +0000 (0:00:02.333) 0:01:55.133 *********** 2025-05-13 20:06:05.019423 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.019430 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.019436 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.019443 | orchestrator | 2025-05-13 20:06:05.019449 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-13 20:06:05.019456 | orchestrator | Tuesday 13 May 2025 20:01:36 +0000 (0:00:00.608) 0:01:55.741 *********** 2025-05-13 20:06:05.019512 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.019520 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.019527 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.019534 | orchestrator |
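The loop items above are kolla-ansible service definitions: the haproxy-config role iterates over a project's service map, and only entries that are enabled and carry a 'haproxy' section produce load-balancer configuration, which is why each node reports a mix of changed and skipping results. As a sketch, the cinder-api entry from the log transliterates into the following YAML (values copied from the log, volumes abridged; the enclosing variable name is not shown in the log and kolla-ansible's exact defaults file may differ):

    cinder-api:
      container_name: cinder_api
      group: cinder-api
      enabled: true
      image: "registry.osism.tech/kolla/cinder-api:2024.2"
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:8776"]
        timeout: "30"
      haproxy:
        cinder_api:
          enabled: "yes"
          mode: "http"
          external: false
          port: "8776"
          listen_port: "8776"
          tls_backend: "no"
        cinder_api_external:
          enabled: "yes"
          mode: "http"
          external: true
          external_fqdn: "api.testbed.osism.xyz"
          port: "8776"
          listen_port: "8776"
          tls_backend: "no"

Each service thus declares an internal frontend on the VIP and an external one behind api.testbed.osism.xyz; the "Configuring firewall" task skipping on all three nodes is consistent with a deployment where kolla-ansible is not managing the host firewall, though the log does not show which condition evaluated false.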
2025-05-13 20:06:05.019541 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-13 20:06:05.019547 | orchestrator | Tuesday 13 May 2025 20:01:36 +0000 (0:00:00.404) 0:01:56.146 *********** 2025-05-13 20:06:05.019554 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.019561 | orchestrator | 2025-05-13 20:06:05.019567 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-13 20:06:05.019574 | orchestrator | Tuesday 13 May 2025 20:01:37 +0000 (0:00:00.972) 0:01:57.118 *********** 2025-05-13 20:06:05.019581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:06:05.019594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.019606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name':
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:06:05.019662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.019669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:06:05.019724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.019731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019775 | orchestrator | 2025-05-13 20:06:05.019782 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-13 20:06:05.019789 | orchestrator | Tuesday 13 May 2025 20:01:44 +0000 (0:00:06.703) 0:02:03.821 *********** 2025-05-13 20:06:05.019801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:06:05.019808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.019815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019863 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.019870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:06:05.019878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.019885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:06:05.019927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.019933 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.019993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:06:05.020016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.020030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.020037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.020050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.020057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.020063 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020069 | orchestrator | 2025-05-13 20:06:05.020076 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-13 20:06:05.020082 | orchestrator | Tuesday 13 May 2025 20:01:45 +0000 (0:00:00.694) 0:02:04.516 *********** 2025-05-13 20:06:05.020089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020103 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.020109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020126 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.020135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-13 20:06:05.020148 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020154 | orchestrator | 2025-05-13 20:06:05.020160 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-13 20:06:05.020166 | orchestrator | Tuesday 13 May 2025 20:01:45 +0000 (0:00:00.860) 0:02:05.376 *********** 2025-05-13 20:06:05.020173 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.020179 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.020185 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.020191 | orchestrator |
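The paired proxysql-config tasks here render a users file and a query-rules file per service into the ProxySQL configuration on each controller; the log records only that the copies changed, not their contents. Purely as an illustration of the general shape (every name and value below is assumed, since the rendered files are not shown in the log and kolla-ansible's actual template may differ):

    # Hypothetical sketch only -- not the file kolla-ansible renders.
    designate:
      users:
        - username: "designate"          # assumed DB account name
          password: "<redacted>"
          default_hostgroup: 0           # writer hostgroup (assumption)
      rules:
        - match_pattern: "^SELECT .*"    # route reads to a reader hostgroup (assumption)
          destination_hostgroup: 1

ProxySQL itself does model users with a default_hostgroup and query rules with match patterns and destination hostgroups; only the YAML wrapping and the concrete values here are invented for illustration.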
2025-05-13 20:06:05.020197 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-13 20:06:05.020204 | orchestrator | Tuesday 13 May 2025 20:01:47 +0000 (0:00:01.736) 0:02:07.113 *********** 2025-05-13 20:06:05.020210 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.020216 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.020222 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.020228 | orchestrator | 2025-05-13 20:06:05.020234 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-13 20:06:05.020240 | orchestrator | Tuesday 13 May 2025 20:01:49 +0000 (0:00:02.126) 0:02:09.239 *********** 2025-05-13 20:06:05.020247 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.020253 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.020259 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020265 | orchestrator | 2025-05-13 20:06:05.020271 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-13 20:06:05.020277 | orchestrator | Tuesday 13 May 2025 20:01:50 +0000 (0:00:00.327) 0:02:09.566 *********** 2025-05-13 20:06:05.020284 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.020290 | orchestrator | 2025-05-13 20:06:05.020296 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-13 20:06:05.020302 | orchestrator | Tuesday 13 May 2025 20:01:50 +0000 (0:00:00.845) 0:02:10.412 *********** 2025-05-13 20:06:05.020316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:06:05.020333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes':
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:06:05.020563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:06:05.020597 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020608 | orchestrator | 2025-05-13 20:06:05.020615 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-13 20:06:05.020621 | orchestrator | Tuesday 13 May 2025 20:01:55 +0000 (0:00:04.151) 0:02:14.564 *********** 2025-05-13 20:06:05.020628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:06:05.020636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020647 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.020661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:06:05.020669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020680 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.020697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:06:05.020705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.020716 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020722 | orchestrator | 2025-05-13 20:06:05.020728 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-13 20:06:05.020735 | orchestrator | Tuesday 13 May 2025 20:01:58 +0000 (0:00:03.009) 0:02:17.573 *********** 2025-05-13 20:06:05.020741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 
20:06:05.020754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 20:06:05.020761 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.020768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 20:06:05.020775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 20:06:05.020781 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 20:06:05.020794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-13 20:06:05.020801 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.020811 | orchestrator | 2025-05-13 20:06:05.020817 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-13 20:06:05.020823 | orchestrator | Tuesday 13 May 2025 20:02:01 +0000 (0:00:03.200) 0:02:20.773 *********** 2025-05-13 20:06:05.020829 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.020836 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.020842 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.020848 | orchestrator | 2025-05-13 20:06:05.020854 
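The two proxysql-config tasks (users, then rules) report changed on all three nodes for glance, and the same pair repeats below for every database-backed service. Broadly, each service contributes a MySQL user and a query-routing rule to the ProxySQL instance fronting the MariaDB/Galera cluster. The sketch below only illustrates the shape of that mapping; the field names, values, and structure are assumptions for illustration, not kolla-ansible's actual schema:

    # Hypothetical illustration of per-service ProxySQL data; not the real
    # kolla-ansible schema. Passwords come from the generated passwords file.
    glance_proxysql = {
        "users": [
            {"username": "glance", "password": "<from passwords.yml>"},
        ],
        "rules": [
            # Pin the service's schema to a destination hostgroup.
            {"schemaname": "glance", "destination_hostgroup": 0, "apply": 1},
        ],
    }

    for section, entries in glance_proxysql.items():
        print(section, entries)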
| orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-13 20:06:05.020860 | orchestrator | Tuesday 13 May 2025 20:02:02 +0000 (0:00:01.541) 0:02:22.315 *********** 2025-05-13 20:06:05.020866 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.020873 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.020879 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.020885 | orchestrator | 2025-05-13 20:06:05.020891 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-13 20:06:05.020897 | orchestrator | Tuesday 13 May 2025 20:02:04 +0000 (0:00:01.975) 0:02:24.290 *********** 2025-05-13 20:06:05.020904 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.020910 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.020916 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.020922 | orchestrator | 2025-05-13 20:06:05.020928 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-13 20:06:05.020934 | orchestrator | Tuesday 13 May 2025 20:02:05 +0000 (0:00:00.304) 0:02:24.595 *********** 2025-05-13 20:06:05.020945 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.020955 | orchestrator | 2025-05-13 20:06:05.020965 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-13 20:06:05.020975 | orchestrator | Tuesday 13 May 2025 20:02:05 +0000 (0:00:00.859) 0:02:25.455 *********** 2025-05-13 20:06:05.020995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:06:05.021008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:06:05.021020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:06:05.021034 | orchestrator | 2025-05-13 20:06:05.021040 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-13 20:06:05.021047 | orchestrator | Tuesday 13 May 2025 20:02:09 +0000 (0:00:03.345) 0:02:28.800 *********** 2025-05-13 20:06:05.021053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:06:05.021059 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:06:05.021072 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:06:05.021085 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021091 | orchestrator | 2025-05-13 20:06:05.021097 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-13 20:06:05.021164 | orchestrator | Tuesday 13 May 2025 20:02:09 +0000 (0:00:00.393) 0:02:29.193 *********** 2025-05-13 20:06:05.021179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 20:06:05.021186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 
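Note the mixed truthiness in the grafana dict above: grafana_server has enabled: 'yes' (a string) while grafana_server_external has enabled: True (a boolean). Ansible's bool filter treats both as true, so the two spellings behave identically; a Python equivalent for readers of these structures, where truthy() is a hypothetical helper rather than Ansible code:

    # Minimal sketch of 'yes'/True normalization as Ansible's bool filter
    # would interpret the values seen in these service dicts.
    def truthy(value):
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in {"yes", "true", "on", "1"}

    assert truthy("yes") and truthy(True)
    assert not truthy("no") and not truthy(False)

The "Configuring firewall" tasks, here and for every other service in this run, skip all items on all nodes, presumably because firewalld-driven API filtering is not enabled in this testbed.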
20:06:05.021195 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 20:06:05.021210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 20:06:05.021218 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-13 20:06:05.021232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-13 20:06:05.021245 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021252 | orchestrator | 2025-05-13 20:06:05.021260 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-13 20:06:05.021267 | orchestrator | Tuesday 13 May 2025 20:02:10 +0000 (0:00:00.655) 0:02:29.849 *********** 2025-05-13 20:06:05.021275 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.021282 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.021290 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.021296 | orchestrator | 2025-05-13 20:06:05.021303 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-13 20:06:05.021309 | orchestrator | Tuesday 13 May 2025 20:02:11 +0000 (0:00:01.586) 0:02:31.436 *********** 2025-05-13 20:06:05.021315 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.021321 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.021328 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.021334 | orchestrator | 2025-05-13 20:06:05.021340 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-13 20:06:05.021346 | orchestrator | Tuesday 13 May 2025 20:02:13 +0000 (0:00:01.920) 0:02:33.356 *********** 2025-05-13 20:06:05.021353 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021359 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021365 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021371 | orchestrator | 2025-05-13 20:06:05.021377 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-13 20:06:05.021383 | orchestrator | Tuesday 13 May 2025 20:02:14 +0000 (0:00:00.358) 0:02:33.715 *********** 2025-05-13 20:06:05.021389 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.021396 | orchestrator | 2025-05-13 20:06:05.021402 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-13 20:06:05.021408 | orchestrator | Tuesday 13 May 2025 20:02:15 +0000 (0:00:00.835) 0:02:34.551 *********** 2025-05-13 20:06:05.021424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:06:05.021529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:06:05.021546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:06:05.021558 | orchestrator | 2025-05-13 20:06:05.021564 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-13 20:06:05.021570 | orchestrator | Tuesday 13 May 2025 20:02:19 +0000 (0:00:04.193) 0:02:38.744 *********** 2025-05-13 20:06:05.021577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:06:05.021584 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:06:05.021610 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:06:05.021624 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021630 | orchestrator | 2025-05-13 20:06:05.021636 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-13 20:06:05.021642 | orchestrator | Tuesday 13 May 2025 20:02:20 +0000 (0:00:01.129) 0:02:39.873 *********** 2025-05-13 20:06:05.021649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 20:06:05.021699 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 20:06:05.021744 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-13 20:06:05.021770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-13 20:06:05.021776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-13 20:06:05.021782 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021788 | orchestrator | 2025-05-13 20:06:05.021801 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-13 20:06:05.021807 | orchestrator | Tuesday 13 May 2025 20:02:21 +0000 (0:00:01.053) 0:02:40.927 *********** 2025-05-13 20:06:05.021814 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.021820 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.021826 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.021832 | orchestrator | 2025-05-13 20:06:05.021838 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-13 20:06:05.021844 | orchestrator | Tuesday 13 May 2025 20:02:23 +0000 (0:00:01.731) 0:02:42.659 *********** 2025-05-13 20:06:05.021850 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.021857 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.021863 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.021869 | orchestrator | 2025-05-13 20:06:05.021875 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-13 20:06:05.021881 | orchestrator | Tuesday 13 May 2025 20:02:25 +0000 (0:00:02.159) 0:02:44.818 *********** 2025-05-13 20:06:05.021887 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021893 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021900 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021906 | orchestrator | 2025-05-13 20:06:05.021912 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-13 20:06:05.021918 | orchestrator | Tuesday 13 May 2025 20:02:25 +0000 (0:00:00.356) 0:02:45.175 *********** 2025-05-13 20:06:05.021924 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.021930 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.021937 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.021947 | orchestrator | 2025-05-13 20:06:05.021958 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-13 20:06:05.021968 | orchestrator | Tuesday 13 May 2025 20:02:26 +0000 (0:00:00.372) 0:02:45.548 *********** 2025-05-13 20:06:05.021978 | orchestrator | included: keystone 
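The horizon entries above are the most involved HAProxy configuration in this play: an HTTP frontend (port 443 mapping to listen_port 80) plus a redirect frontend on port 80, for both the internal and external VIPs, with every frontend carrying "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }" so that Let's Encrypt HTTP-01 challenges bypass Horizon entirely (acme_client itself has with_frontend: False and an empty member list). The same routing decision, expressed in Python for clarity rather than as HAProxy actually evaluates it:

    import re

    # The regex from the frontend rules above (dot escaped for Python).
    ACME = re.compile(r"^/\.well-known/acme-challenge/.+")

    def pick_backend(path):
        # ACME challenges go to the acme client; everything else to Horizon.
        return "acme_client_back" if ACME.match(path) else "horizon_back"

    assert pick_backend("/.well-known/acme-challenge/token123") == "acme_client_back"
    assert pick_backend("/auth/login/") == "horizon_back"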
for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.021988 | orchestrator | 2025-05-13 20:06:05.021997 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-13 20:06:05.022008 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:01.113) 0:02:46.661 *********** 2025-05-13 20:06:05.022043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:06:05.022069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:06:05.022103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:06:05.022145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022166 | orchestrator | 2025-05-13 20:06:05.022180 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-13 20:06:05.022187 | orchestrator | Tuesday 13 May 2025 20:02:30 +0000 (0:00:03.755) 0:02:50.416 *********** 2025-05-13 20:06:05.022215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:06:05.022223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022243 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.022297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:06:05.022304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022326 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.022333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:06:05.022340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:06:05.022351 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:06:05.022357 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.022364 | orchestrator | 2025-05-13 20:06:05.022370 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-13 20:06:05.022376 | orchestrator | Tuesday 13 May 2025 20:02:31 +0000 (0:00:00.610) 0:02:51.027 *********** 2025-05-13 20:06:05.022383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022397 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.022404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022424 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.022431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-13 20:06:05.022444 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.022450 | orchestrator | 2025-05-13 20:06:05.022456 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-13 20:06:05.022481 | orchestrator | Tuesday 13 May 2025 20:02:32 +0000 (0:00:01.052) 0:02:52.079 *********** 2025-05-13 20:06:05.022492 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.022503 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.022513 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.022522 | orchestrator | 2025-05-13 20:06:05.022533 | 
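Of the three keystone containers, only keystone itself carries a haproxy key, which is presumably why the copy task above reports changed for keystone but skips keystone-ssh and keystone-fernet on every node; both keystone frontends (internal and external) listen on 5000 with "balance roundrobin". A hypothetical reimplementation of that selection, for illustration only:

    # Sketch of the haproxy-config selection: only services whose dict has
    # a 'haproxy' key produce load-balancer configuration. Not kolla code.
    services = {
        "keystone": {"haproxy": {"keystone_internal": {"port": "5000"}}},
        "keystone-ssh": {},     # no 'haproxy' key -> skipped
        "keystone-fernet": {},  # no 'haproxy' key -> skipped
    }

    with_lb = {name: svc["haproxy"] for name, svc in services.items() if "haproxy" in svc}
    print(sorted(with_lb))  # ['keystone']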
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-05-13 20:06:05.022540 | orchestrator | Tuesday 13 May 2025 20:02:33 +0000 (0:00:01.273) 0:02:53.352 ***********
2025-05-13 20:06:05.022551 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.022557 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.022564 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.022570 | orchestrator |
2025-05-13 20:06:05.022576 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-05-13 20:06:05.022582 | orchestrator | Tuesday 13 May 2025 20:02:35 +0000 (0:00:02.041) 0:02:55.394 ***********
2025-05-13 20:06:05.022588 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.022594 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.022600 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.022606 | orchestrator |
2025-05-13 20:06:05.022612 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-05-13 20:06:05.022618 | orchestrator | Tuesday 13 May 2025 20:02:36 +0000 (0:00:00.311) 0:02:55.706 ***********
2025-05-13 20:06:05.022624 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:06:05.022630 | orchestrator |
2025-05-13 20:06:05.022636 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-05-13 20:06:05.022642 | orchestrator | Tuesday 13 May 2025 20:02:37 +0000 (0:00:01.311) 0:02:57.017 ***********
2025-05-13 20:06:05.022649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022703 | orchestrator |
2025-05-13 20:06:05.022709 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-05-13 20:06:05.022715 | orchestrator | Tuesday 13 May 2025 20:02:41 +0000 (0:00:03.516) 0:03:00.534 ***********
2025-05-13 20:06:05.022722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022759 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.022766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022779 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.022786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:06:05.022793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.022799 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.022812 | orchestrator |
2025-05-13 20:06:05.022819 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-05-13 20:06:05.022825 | orchestrator | Tuesday 13 May 2025 20:02:41 +0000 (0:00:00.678) 0:03:01.213 ***********
2025-05-13 20:06:05.022871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022889 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.022895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022914 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.022921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-13 20:06:05.022950 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.022961 | orchestrator |
2025-05-13 20:06:05.022971 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-05-13 20:06:05.022981 | orchestrator | Tuesday 13 May 2025 20:02:43 +0000 (0:00:01.293) 0:03:02.506 ***********
2025-05-13 20:06:05.022992 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.023001 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.023013 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.023019 | orchestrator |
2025-05-13 20:06:05.023026 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-05-13 20:06:05.023038 | orchestrator | Tuesday 13 May 2025 20:02:44 +0000 (0:00:01.281) 0:03:03.788 ***********
2025-05-13 20:06:05.023047 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.023058 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.023067 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.023076 | orchestrator |
2025-05-13 20:06:05.023086 | orchestrator | TASK [include_role : manila] ***************************************************
2025-05-13 20:06:05.023096 | orchestrator | Tuesday 13 May 2025 20:02:46 +0000 (0:00:02.050) 0:03:05.838 ***********
2025-05-13 20:06:05.023107 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:06:05.023117 | orchestrator |
2025-05-13 20:06:05.023127 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-05-13 20:06:05.023137 | orchestrator | Tuesday 13 May 2025 20:02:47 +0000 (0:00:01.045) 0:03:06.884 ***********
2025-05-13 20:06:05.023146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023290 | orchestrator |
2025-05-13 20:06:05.023297 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-05-13 20:06:05.023303 | orchestrator | Tuesday 13 May 2025 20:02:50 +0000 (0:00:03.536) 0:03:10.420 ***********
2025-05-13 20:06:05.023310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023349 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.023356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023386 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.023400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-13 20:06:05.023406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.023426 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.023432 | orchestrator |
2025-05-13 20:06:05.023438 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-05-13 20:06:05.023444 | orchestrator | Tuesday 13 May 2025 20:02:51 +0000 (0:00:00.806) 0:03:11.226 ***********
2025-05-13 20:06:05.023451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023512 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.023519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023531 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.023537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-13 20:06:05.023550 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.023556 | orchestrator |
2025-05-13 20:06:05.023562 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-05-13 20:06:05.023568 | orchestrator | Tuesday 13 May 2025 20:02:52 +0000 (0:00:00.844) 0:03:12.071 ***********
2025-05-13 20:06:05.023575 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.023581 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.023587 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.023593 | orchestrator |
2025-05-13 20:06:05.023599 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-05-13 20:06:05.023605 | orchestrator | Tuesday 13 May 2025 20:02:54 +0000 (0:00:01.613) 0:03:13.684 ***********
2025-05-13 20:06:05.023616 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.023627 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.023633 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.023639 | orchestrator |
2025-05-13 20:06:05.023645 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-05-13 20:06:05.023651 | orchestrator | Tuesday 13 May 2025 20:02:56 +0000 (0:00:02.167) 0:03:15.852 ***********
2025-05-13 20:06:05.023658 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:06:05.023664 | orchestrator |
2025-05-13 20:06:05.023670 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-05-13 20:06:05.023676 | orchestrator | Tuesday 13 May 2025 20:02:57 +0000 (0:00:01.055) 0:03:16.908 ***********
2025-05-13 20:06:05.023683 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-13 20:06:05.023689 | orchestrator |
2025-05-13 20:06:05.023695 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-05-13 20:06:05.023701 | orchestrator | Tuesday 13 May 2025 20:03:00 +0000 (0:00:03.012) 0:03:19.920 ***********
2025-05-13 20:06:05.023709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.023720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.023727 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.024079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.024092 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.024105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.024110 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024116 | orchestrator |
2025-05-13 20:06:05.024122 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-05-13 20:06:05.024135 | orchestrator | Tuesday 13 May 2025 20:03:03 +0000 (0:00:02.602) 0:03:22.523 ***********
2025-05-13 20:06:05.024142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.024152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.024158 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.024178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.024184 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-13 20:06:05.024201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-13 20:06:05.024206 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024212 | orchestrator |
2025-05-13 20:06:05.024217 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-05-13 20:06:05.024223 | orchestrator | Tuesday 13 May 2025 20:03:05 +0000 (0:00:02.217) 0:03:24.741 ***********
2025-05-13 20:06:05.024231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024249 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024272 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-13 20:06:05.024289 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024295 | orchestrator |
2025-05-13 20:06:05.024300 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-05-13 20:06:05.024306 | orchestrator | Tuesday 13 May 2025 20:03:08 +0000 (0:00:02.765) 0:03:27.506 ***********
2025-05-13 20:06:05.024312 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:06:05.024317 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:06:05.024323 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:06:05.024329 | orchestrator |
2025-05-13 20:06:05.024335 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-05-13 20:06:05.024341 | orchestrator | Tuesday 13 May 2025 20:03:10 +0000 (0:00:02.052) 0:03:29.558 ***********
2025-05-13 20:06:05.024347 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024353 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024358 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024364 | orchestrator |
2025-05-13 20:06:05.024369 | orchestrator | TASK [include_role : masakari] *************************************************
2025-05-13 20:06:05.024375 | orchestrator | Tuesday 13 May 2025 20:03:11 +0000 (0:00:01.393) 0:03:30.952 ***********
2025-05-13 20:06:05.024381 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024387 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024393 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024398 | orchestrator |
2025-05-13 20:06:05.024404 | orchestrator | TASK [include_role : memcached] ************************************************
2025-05-13 20:06:05.024413 | orchestrator | Tuesday 13 May 2025 20:03:11 +0000 (0:00:00.327) 0:03:31.280 ***********
2025-05-13 20:06:05.024424 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:06:05.024430 | orchestrator |
2025-05-13 20:06:05.024435 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-05-13 20:06:05.024440 | orchestrator | Tuesday 13 May 2025 20:03:13 +0000 (0:00:01.212) 0:03:32.492 ***********
2025-05-13 20:06:05.024446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024545 | orchestrator |
2025-05-13 20:06:05.024551 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-05-13 20:06:05.024594 | orchestrator | Tuesday 13 May 2025 20:03:14 +0000 (0:00:01.662) 0:03:34.155 ***********
2025-05-13 20:06:05.024603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024609 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024692 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.024701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-13 20:06:05.024708 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.024715 | orchestrator |
2025-05-13 20:06:05.024722 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-05-13 20:06:05.024728 | orchestrator | Tuesday 13 May 2025 20:03:15 +0000 (0:00:00.335) 0:03:34.491 ***********
2025-05-13 20:06:05.024735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-13 20:06:05.024742 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.024749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-13 20:06:05.024756 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.025523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-13 20:06:05.025545 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.025551 | orchestrator |
2025-05-13 20:06:05.025557 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-05-13 20:06:05.025563 | orchestrator | Tuesday 13 May 2025 20:03:15 +0000 (0:00:00.514) 0:03:35.005 ***********
2025-05-13 20:06:05.025569 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.025574 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.025580 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.025607 | orchestrator |
2025-05-13 20:06:05.025614 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-05-13 20:06:05.025619 | orchestrator | Tuesday 13 May 2025 20:03:16 +0000 (0:00:00.615) 0:03:35.620 ***********
2025-05-13 20:06:05.025625 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.025631 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.025637 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.025642 | orchestrator |
2025-05-13 20:06:05.025648 | orchestrator | TASK [include_role : mistral] **************************************************
2025-05-13 20:06:05.025653 | orchestrator | Tuesday 13 May 2025 20:03:17 +0000 (0:00:01.124) 0:03:36.745 ***********
2025-05-13 20:06:05.025659 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:06:05.025738 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:06:05.025865 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:06:05.025875 | orchestrator |
2025-05-13 20:06:05.025883 | orchestrator | TASK [include_role : neutron] **************************************************
2025-05-13 20:06:05.025892 | orchestrator | Tuesday 13 May 2025 20:03:17 +0000 (0:00:00.279) 0:03:37.024 ***********
2025-05-13 20:06:05.025901 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:06:05.025909 | orchestrator |
2025-05-13 20:06:05.025918 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-05-13 20:06:05.025926 | orchestrator | Tuesday 13 May 2025 20:03:18 +0000 (0:00:01.378) 0:03:38.403 ***********
2025-05-13 20:06:05.025968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-13 20:06:05.025980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.025989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.025999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.026008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-13 20:06:05.026231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-13 20:06:05.026310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-13 20:06:05.026352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-13 20:06:05.026361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name':
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 20:06:05.026443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.026515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:06:05.026603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.026625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 20:06:05.026675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.026810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.026815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.026942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.026955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027114 | orchestrator | 2025-05-13 20:06:05.027121 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-13 20:06:05.027127 | orchestrator | Tuesday 13 May 2025 20:03:23 +0000 (0:00:04.177) 0:03:42.580 *********** 2025-05-13 20:06:05.027133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:06:05.027139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 20:06:05.027189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:06:05.027195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:06:05.027243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 20:06:05.027281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-05-13 20:06:05.027378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:06:05.027389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.027594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-13 20:06:05.027642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.027664 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.027669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027886 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.027891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-13 20:06:05.027924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.027929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-13 20:06:05.027934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:06:05.027983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028007 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.028016 | orchestrator | 2025-05-13 20:06:05.028024 | orchestrator | TASK 
[haproxy-config : Configuring firewall for neutron] *********************** 2025-05-13 20:06:05.028030 | orchestrator | Tuesday 13 May 2025 20:03:24 +0000 (0:00:01.519) 0:03:44.100 *********** 2025-05-13 20:06:05.028035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028067 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.028072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028134 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.028139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-13 20:06:05.028144 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.028149 | orchestrator | 2025-05-13 20:06:05.028154 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-13 20:06:05.028159 | orchestrator | Tuesday 13 May 2025 20:03:26 +0000 (0:00:02.169) 0:03:46.269 *********** 2025-05-13 20:06:05.028163 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.028184 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.028189 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.028195 | orchestrator | 2025-05-13 20:06:05.028200 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-13 20:06:05.028205 | orchestrator | Tuesday 13 May 2025 20:03:28 +0000 (0:00:01.394) 0:03:47.663 *********** 2025-05-13 20:06:05.028210 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.028216 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.028221 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.028226 | orchestrator | 2025-05-13 20:06:05.028231 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-13 20:06:05.028236 | orchestrator | Tuesday 13 May 2025 20:03:30 +0000 (0:00:02.118) 0:03:49.781 *********** 2025-05-13 20:06:05.028241 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.028246 | orchestrator | 2025-05-13 20:06:05.028252 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-13 20:06:05.028257 | orchestrator | Tuesday 13 May 2025 20:03:31 +0000 (0:00:01.184) 0:03:50.966 *********** 2025-05-13 20:06:05.028262 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028303 | orchestrator | 2025-05-13 20:06:05.028308 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-13 20:06:05.028313 | orchestrator | Tuesday 13 May 2025 20:03:34 +0000 (0:00:03.075) 0:03:54.042 *********** 2025-05-13 20:06:05.028319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.028325 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.028330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.028339 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.028345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.028350 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.028355 | orchestrator | 2025-05-13 20:06:05.028360 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-13 20:06:05.028365 | orchestrator | Tuesday 13 May 2025 20:03:35 +0000 (0:00:00.435) 0:03:54.478 *********** 2025-05-13 20:06:05.028371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028396 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.028435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028446 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.028451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-13 20:06:05.028478 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.028483 | orchestrator | 2025-05-13 20:06:05.028488 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-13 20:06:05.028493 | orchestrator | Tuesday 13 May 2025 20:03:35 +0000 (0:00:00.674) 0:03:55.152 *********** 2025-05-13 20:06:05.028497 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.028502 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.028507 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.028513 | orchestrator | 2025-05-13 20:06:05.028519 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-13 20:06:05.028525 | orchestrator | Tuesday 13 May 2025 20:03:37 +0000 (0:00:01.345) 0:03:56.498 *********** 2025-05-13 20:06:05.028531 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.028537 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.028543 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.028553 | orchestrator | 2025-05-13 20:06:05.028559 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-13 20:06:05.028565 | orchestrator | Tuesday 13 May 2025 20:03:38 +0000 (0:00:01.882) 0:03:58.380 *********** 2025-05-13 20:06:05.028789 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.028796 | orchestrator | 2025-05-13 20:06:05.028801 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-13 20:06:05.028807 | orchestrator | Tuesday 13 May 2025 20:03:40 +0000 (0:00:01.302) 0:03:59.683 *********** 2025-05-13 20:06:05.028814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.028899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028935 | orchestrator | 2025-05-13 20:06:05.028948 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-13 20:06:05.028956 | orchestrator | Tuesday 13 May 2025 20:03:44 +0000 (0:00:04.555) 0:04:04.239 *********** 2025-05-13 20:06:05.028965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.028973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.028989 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.029034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.029043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.029115 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.029129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.029153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.029159 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029164 | orchestrator | 2025-05-13 20:06:05.029169 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-13 20:06:05.029174 | orchestrator | Tuesday 13 May 2025 20:03:45 +0000 (0:00:01.025) 0:04:05.265 *********** 2025-05-13 20:06:05.029183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029204 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029229 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029351 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-13 20:06:05.029361 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029366 | orchestrator | 2025-05-13 20:06:05.029370 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-13 20:06:05.029375 | orchestrator | Tuesday 13 May 2025 20:03:46 +0000 (0:00:00.937) 0:04:06.202 *********** 2025-05-13 20:06:05.029380 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.029385 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.029390 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.029394 | orchestrator | 2025-05-13 20:06:05.029399 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-13 20:06:05.029404 | orchestrator | Tuesday 13 May 2025 20:03:48 +0000 (0:00:01.656) 0:04:07.859 *********** 2025-05-13 20:06:05.029409 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.029413 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.029418 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.029427 | orchestrator | 2025-05-13 20:06:05.029432 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-13 20:06:05.029436 | orchestrator | Tuesday 13 May 2025 20:03:50 +0000 (0:00:02.073) 0:04:09.932 *********** 2025-05-13 20:06:05.029441 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.029446 | orchestrator | 2025-05-13 20:06:05.029482 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-13 20:06:05.029489 | orchestrator | Tuesday 13 May 2025 20:03:52 +0000 (0:00:01.610) 0:04:11.543 *********** 2025-05-13 20:06:05.029494 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-13 20:06:05.029499 | orchestrator | 2025-05-13 20:06:05.029504 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-13 20:06:05.029509 | orchestrator | Tuesday 13 May 2025 20:03:53 +0000 (0:00:01.135) 0:04:12.678 *********** 2025-05-13 20:06:05.029514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 20:06:05.029519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 20:06:05.029524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-13 20:06:05.029558 | orchestrator | 2025-05-13 20:06:05.029563 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-13 20:06:05.029568 | orchestrator | Tuesday 13 May 2025 20:03:57 +0000 (0:00:03.829) 0:04:16.508 *********** 2025-05-13 20:06:05.029573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029579 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029594 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029619 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029625 | orchestrator | 2025-05-13 20:06:05.029629 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-13 20:06:05.029634 | orchestrator | Tuesday 13 May 2025 20:03:58 +0000 (0:00:01.380) 0:04:17.888 *********** 2025-05-13 20:06:05.029656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029662 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029668 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029683 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-13 20:06:05.029698 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029702 | orchestrator | 2025-05-13 20:06:05.029707 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 20:06:05.029712 | orchestrator | Tuesday 13 May 2025 20:04:00 +0000 (0:00:01.853) 0:04:19.742 *********** 2025-05-13 20:06:05.029717 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.029722 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.029727 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.029731 | orchestrator | 2025-05-13 20:06:05.029736 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 20:06:05.029741 | orchestrator | Tuesday 13 May 2025 20:04:02 +0000 (0:00:02.364) 0:04:22.107 *********** 2025-05-13 20:06:05.029746 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.029751 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.029755 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.029760 | orchestrator | 2025-05-13 20:06:05.029765 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-13 20:06:05.029770 | orchestrator | Tuesday 13 May 2025 20:04:05 +0000 (0:00:03.001) 0:04:25.109 *********** 2025-05-13 20:06:05.029775 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-13 20:06:05.029784 | orchestrator | 2025-05-13 20:06:05.029789 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-13 20:06:05.029794 | orchestrator | Tuesday 13 May 2025 20:04:06 +0000 (0:00:00.795) 0:04:25.904 *********** 2025-05-13 20:06:05.029799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029804 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029814 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029839 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029843 | orchestrator | 2025-05-13 20:06:05.029848 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-13 20:06:05.029853 | orchestrator | Tuesday 13 May 2025 20:04:07 +0000 (0:00:01.426) 0:04:27.331 *********** 2025-05-13 20:06:05.029858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029863 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029873 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-13 20:06:05.029897 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029902 | orchestrator | 2025-05-13 20:06:05.029907 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-13 20:06:05.029912 | orchestrator | Tuesday 13 May 2025 20:04:09 +0000 (0:00:01.631) 0:04:28.962 *********** 2025-05-13 20:06:05.029917 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.029921 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.029926 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.029931 | orchestrator | 2025-05-13 20:06:05.029936 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 20:06:05.029945 | orchestrator | Tuesday 13 May 2025 20:04:10 +0000 (0:00:01.226) 0:04:30.189 *********** 2025-05-13 20:06:05.029953 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.029996 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.030005 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.030013 | orchestrator | 2025-05-13 20:06:05.030050 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 20:06:05.030057 | orchestrator | Tuesday 13 May 2025 20:04:12 +0000 (0:00:02.270) 0:04:32.459 *********** 2025-05-13 20:06:05.030071 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.030077 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.030082 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.030088 | orchestrator | 2025-05-13 20:06:05.030094 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-13 20:06:05.030099 | orchestrator | Tuesday 13 May 2025 20:04:15 +0000 (0:00:02.730) 0:04:35.189 *********** 2025-05-13 20:06:05.030106 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-13 20:06:05.030112 | orchestrator | 2025-05-13 20:06:05.030117 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-13 20:06:05.030123 | orchestrator | Tuesday 13 May 2025 20:04:16 +0000 (0:00:00.949) 0:04:36.139 *********** 2025-05-13 20:06:05.030130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030136 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.030166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030173 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.030179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030185 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.030199 | orchestrator | 2025-05-13 20:06:05.030205 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-13 20:06:05.030211 | orchestrator | Tuesday 13 May 2025 20:04:17 +0000 (0:00:00.999) 0:04:37.139 *********** 2025-05-13 20:06:05.030216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030222 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.030228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030234 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.030240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-13 20:06:05.030246 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.030251 | orchestrator | 2025-05-13 20:06:05.030257 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-13 20:06:05.030263 | orchestrator | Tuesday 13 May 2025 20:04:18 +0000 (0:00:01.229) 0:04:38.369 *********** 2025-05-13 20:06:05.030269 | orchestrator | skipping: [testbed-node-0] 
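The alternating changed/skipping results through this part of the log all trace back to flags embedded in the logged items themselves: the haproxy-config tasks only render a proxy for a service whose item is enabled and which actually carries an haproxy map. That is why the SPICE and serial console proxies (enabled: False in this testbed, which deploys only the noVNC console) and the octavia worker/agent containers (no haproxy key) skip on every node, while octavia-api, opensearch, and prometheus-server later come back changed. A minimal Python sketch of that selection rule, built from the nova-serialproxy item exactly as logged — an illustration of the observed behaviour, not the role's actual Ansible/Jinja implementation:

# Hedged sketch: reproduces the skip/render decision visible in this log,
# not kolla-ansible's real haproxy-config role (which is Ansible/Jinja).
service = {
    "key": "nova-serialproxy",
    "value": {
        "group": "nova-serialproxy",
        "enabled": False,  # copied from the logged item
        "haproxy": {
            "nova_serialconsole_proxy": {
                "enabled": False, "mode": "http", "external": False,
                "port": "6083", "listen_port": "6083",
                "backend_http_extra": ["timeout tunnel 10m"],
            },
        },
    },
}

def should_render(item):
    """True only for an enabled service that defines at least one enabled
    haproxy frontend -- everything else shows up as 'skipping' in the log."""
    value = item["value"]
    return bool(value.get("enabled")) and any(
        bool(frontend.get("enabled"))
        for frontend in value.get("haproxy", {}).values()
    )

print(should_render(service))  # False -> "skipping: [testbed-node-*]"

Flipping enabled to True, as in the logged octavia-api or opensearch items, makes the check pass, matching the changed results those services report below (the octavia entries use 'yes' strings, which Ansible's bool filter normalizes). The "Configuring firewall" tasks skip under a separate condition that is not visible in the item dicts here.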
2025-05-13 20:06:05.030274 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.030280 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.030286 | orchestrator | 2025-05-13 20:06:05.030292 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-13 20:06:05.030298 | orchestrator | Tuesday 13 May 2025 20:04:20 +0000 (0:00:01.846) 0:04:40.215 *********** 2025-05-13 20:06:05.030304 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.030310 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.030315 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.030326 | orchestrator | 2025-05-13 20:06:05.030331 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-13 20:06:05.030336 | orchestrator | Tuesday 13 May 2025 20:04:23 +0000 (0:00:02.337) 0:04:42.552 *********** 2025-05-13 20:06:05.030341 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.030346 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.030350 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.030355 | orchestrator | 2025-05-13 20:06:05.030360 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-13 20:06:05.030365 | orchestrator | Tuesday 13 May 2025 20:04:26 +0000 (0:00:03.110) 0:04:45.662 *********** 2025-05-13 20:06:05.030370 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.030375 | orchestrator | 2025-05-13 20:06:05.030380 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-13 20:06:05.030384 | orchestrator | Tuesday 13 May 2025 20:04:27 +0000 (0:00:01.314) 0:04:46.977 *********** 2025-05-13 20:06:05.030409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.030415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.030431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.030503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030543 | orchestrator | 2025-05-13 20:06:05.030548 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-13 20:06:05.030553 | orchestrator | Tuesday 13 May 2025 20:04:31 +0000 (0:00:03.675) 0:04:50.652 *********** 2025-05-13 20:06:05.030558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.030563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030602 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.030607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.030612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030650 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.030655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.030660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:06:05.030665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:06:05.030675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:06:05.030686 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.030691 | orchestrator | 2025-05-13 20:06:05.030696 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-13 20:06:05.030701 | orchestrator | Tuesday 13 May 2025 20:04:31 +0000 (0:00:00.702) 0:04:51.355 *********** 2025-05-13 20:06:05.030706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030716 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.030734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030745 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.030750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-13 20:06:05.030760 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.030765 | orchestrator | 2025-05-13 20:06:05.030769 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-13 20:06:05.030774 | orchestrator | Tuesday 13 May 2025 20:04:32 +0000 (0:00:00.847) 0:04:52.202 *********** 2025-05-13 20:06:05.030779 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.030784 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.030789 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.030793 | orchestrator | 2025-05-13 20:06:05.030798 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-13 20:06:05.030803 | orchestrator | Tuesday 13 May 2025 20:04:34 +0000 (0:00:01.743) 0:04:53.946 *********** 2025-05-13 20:06:05.030808 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.030812 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.030817 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.030822 | orchestrator | 2025-05-13 20:06:05.030827 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-13 20:06:05.030832 | orchestrator | Tuesday 13 May 2025 20:04:36 +0000 (0:00:02.140) 0:04:56.087 *********** 2025-05-13 20:06:05.030836 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.030841 | orchestrator | 2025-05-13 20:06:05.030846 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-13 20:06:05.030850 | orchestrator | Tuesday 13 May 2025 20:04:37 +0000 (0:00:01.367) 0:04:57.455 *********** 2025-05-13 20:06:05.030856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:06:05.030866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2025-05-13 20:06:05.030885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:06:05.030891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:06:05.030897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:06:05.030908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:06:05.030914 | orchestrator | 2025-05-13 20:06:05.030919 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-13 20:06:05.030923 | orchestrator | Tuesday 13 May 2025 20:04:43 +0000 (0:00:05.437) 0:05:02.893 *********** 2025-05-13 20:06:05.030944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:06:05.030955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:06:05.030963 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.030971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:06:05.030985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:06:05.030995 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:06:05.031031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:06:05.031037 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031041 | orchestrator | 2025-05-13 20:06:05.031046 | orchestrator | TASK 
[haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-13 20:06:05.031051 | orchestrator | Tuesday 13 May 2025 20:04:44 +0000 (0:00:00.974) 0:05:03.867 *********** 2025-05-13 20:06:05.031056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 20:06:05.031065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031075 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 20:06:05.031084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-13 20:06:05.031100 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-13 20:06:05.031115 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031119 | orchestrator | 2025-05-13 20:06:05.031124 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-13 20:06:05.031129 | orchestrator | Tuesday 13 May 2025 20:04:45 +0000 (0:00:00.887) 0:05:04.755 *********** 2025-05-13 20:06:05.031134 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031139 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031143 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031148 | orchestrator | 2025-05-13 20:06:05.031153 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-13 20:06:05.031158 | orchestrator | Tuesday 13 May 2025 20:04:45 +0000 
(0:00:00.438) 0:05:05.193 *********** 2025-05-13 20:06:05.031162 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031178 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031186 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031191 | orchestrator | 2025-05-13 20:06:05.031196 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-13 20:06:05.031201 | orchestrator | Tuesday 13 May 2025 20:04:47 +0000 (0:00:01.347) 0:05:06.540 *********** 2025-05-13 20:06:05.031206 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.031210 | orchestrator | 2025-05-13 20:06:05.031215 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-13 20:06:05.031220 | orchestrator | Tuesday 13 May 2025 20:04:48 +0000 (0:00:01.688) 0:05:08.229 *********** 2025-05-13 20:06:05.031225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 20:06:05.031234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 20:06:05.031274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 20:06:05.031307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 20:06:05.031352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 20:06:05.031387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 20:06:05.031422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031443 | orchestrator | 2025-05-13 20:06:05.031448 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-13 20:06:05.031453 | orchestrator | Tuesday 13 May 2025 20:04:52 +0000 (0:00:04.193) 0:05:12.422 *********** 2025-05-13 20:06:05.031458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
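The dictionaries looped over above are the kolla-ansible service definitions for the prometheus role. As a reading aid, here is a minimal YAML sketch of the one entry whose internal listener is enabled, reconstructed from the values printed in the log (the key layout is assumed to mirror the printed dicts, not copied from the role source):

    prometheus-server:
      container_name: prometheus_server
      group: prometheus
      enabled: true
      image: registry.osism.tech/kolla/prometheus-v2-server:2024.2
      haproxy:
        prometheus_server:              # internal VIP listener on port 9091
          enabled: true
          mode: http
          external: false
          port: "9091"
          active_passive: true          # all but one backend rendered as backup
        prometheus_server_external:     # external listener, disabled in this testbed
          enabled: false
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "9091"
          listen_port: "9091"
          active_passive: true

active_passive: true is what makes HAProxy direct traffic to a single backend at a time; that fits Prometheus and Alertmanager (port 9093, with basic auth in the printed dicts), where only one instance should serve behind the VIP.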
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 20:06:05.031501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 20:06:05.031543 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031572 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 20:06:05.031582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 20:06:05.031618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 20:06:05.031641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031646 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:06:05.031662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-13 20:06:05.031683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-13 20:06:05.031691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:06:05.031705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:06:05.031710 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031715 | orchestrator | 2025-05-13 20:06:05.031720 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-13 20:06:05.031725 | orchestrator | Tuesday 13 May 2025 20:04:54 +0000 (0:00:01.455) 0:05:13.878 *********** 2025-05-13 20:06:05.031729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 20:06:05.031734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 20:06:05.031740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031765 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 20:06:05.031774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 
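Every item in the single-external-frontend task above is skipped on all three nodes. That code path is only taken when the deployment consolidates all external APIs behind one shared HAProxy frontend; with the default setting, the per-service external listeners printed earlier (distinct ports such as 9091 and 9093 on api.testbed.osism.xyz) are used instead. A hedged globals.yml sketch of the toggle, using the variable name from recent kolla-ansible releases (verify against the deployed version):

    # /etc/kolla/globals.yml (sketch)
    haproxy_single_external_frontend: false   # default: per-service external listeners
    # when true, external services share one frontend and are routed by
    # external_fqdn instead of per-service ports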
20:06:05.031783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031793 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-13 20:06:05.031802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-13 20:06:05.031807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-13 20:06:05.031823 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031828 | orchestrator | 2025-05-13 20:06:05.031832 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-13 20:06:05.031837 | orchestrator | Tuesday 13 May 2025 20:04:55 +0000 (0:00:01.047) 0:05:14.925 *********** 2025-05-13 20:06:05.031842 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031847 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031852 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031856 | orchestrator | 2025-05-13 20:06:05.031861 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-13 20:06:05.031866 | orchestrator | Tuesday 13 May 2025 20:04:55 +0000 (0:00:00.454) 0:05:15.379 *********** 2025-05-13 20:06:05.031871 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031876 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.031880 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.031885 | orchestrator | 2025-05-13 20:06:05.031890 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-13 20:06:05.031895 | orchestrator | Tuesday 13 May 2025 20:04:57 +0000 (0:00:01.734) 0:05:17.114 *********** 2025-05-13 20:06:05.031899 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.031904 | orchestrator | 2025-05-13 20:06:05.031909 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-13 
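The firewall tasks for prometheus skip as well: kolla-ansible only writes firewall rules for a service when external API firewall management is switched on, which this testbed evidently leaves off. The prometheus ProxySQL users/rules tasks skip for a different reason: the role registers no database users, unlike skyline further below, where the same two tasks report changed. A hedged sketch of the firewall toggle (variable names as in recent kolla-ansible releases; an assumption, verify before relying on it):

    # /etc/kolla/globals.yml (sketch)
    enable_external_api_firewalld: "no"   # firewall tasks are skipped unless enabled
    external_api_firewalld_zone: public   # zone used when the flag above is "yes"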
20:06:05.031914 | orchestrator | Tuesday 13 May 2025 20:04:59 +0000 (0:00:01.702) 0:05:18.817 *********** 2025-05-13 20:06:05.031919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:06:05.031927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:06:05.031935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-13 20:06:05.031943 | orchestrator | 2025-05-13 20:06:05.031955 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-13 20:06:05.031963 | orchestrator | Tuesday 13 May 2025 20:05:01 +0000 (0:00:02.524) 0:05:21.341 *********** 2025-05-13 20:06:05.031971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 20:06:05.031978 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.031987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 20:06:05.032000 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-13 20:06:05.032017 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032022 | orchestrator | 2025-05-13 20:06:05.032026 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-13 20:06:05.032031 | orchestrator | Tuesday 13 May 2025 20:05:02 +0000 (0:00:00.378) 0:05:21.720 *********** 2025-05-13 20:06:05.032036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': 
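The rabbitmq service definition printed above also carries a healthcheck and a management-UI listener; the bootstrap values shown as None in the log (KOLLA_BOOTSTRAP, RABBITMQ_CLUSTER_COOKIE) are simply unset in this rendered map. A minimal YAML sketch built from the printed values (layout assumed, not taken from the role source):

    rabbitmq:
      container_name: rabbitmq
      enabled: true
      image: registry.osism.tech/kolla/rabbitmq:2024.2
      healthcheck:
        test: ["CMD-SHELL", "healthcheck_rabbitmq"]   # in-container health script
        interval: "30"
        retries: "3"
        start_period: "5"
        timeout: "30"
      haproxy:
        rabbitmq_management:          # management UI only in this map
          enabled: "yes"
          mode: http
          port: "15672"
          host_group: rabbitmq        # backends come from the rabbitmq inventory group

Note that host_group points HAProxy at the rabbitmq inventory group, and that only the management port (15672) appears in this map; client AMQP traffic is not proxied through it.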
'15672', 'host_group': 'rabbitmq'}})  2025-05-13 20:06:05.032040 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-13 20:06:05.032049 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-13 20:06:05.032058 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032063 | orchestrator | 2025-05-13 20:06:05.032067 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-13 20:06:05.032079 | orchestrator | Tuesday 13 May 2025 20:05:03 +0000 (0:00:00.995) 0:05:22.716 *********** 2025-05-13 20:06:05.032084 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032088 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032093 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032097 | orchestrator | 2025-05-13 20:06:05.032102 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-13 20:06:05.032107 | orchestrator | Tuesday 13 May 2025 20:05:03 +0000 (0:00:00.420) 0:05:23.137 *********** 2025-05-13 20:06:05.032111 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032116 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032120 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032125 | orchestrator | 2025-05-13 20:06:05.032129 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-13 20:06:05.032134 | orchestrator | Tuesday 13 May 2025 20:05:04 +0000 (0:00:01.269) 0:05:24.406 *********** 2025-05-13 20:06:05.032138 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:06:05.032146 | orchestrator | 2025-05-13 20:06:05.032151 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-13 20:06:05.032156 | orchestrator | Tuesday 13 May 2025 20:05:06 +0000 (0:00:01.807) 0:05:26.214 *********** 2025-05-13 20:06:05.032160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-13 20:06:05.032199 | orchestrator | 2025-05-13 20:06:05.032203 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-13 20:06:05.032208 | orchestrator | Tuesday 13 May 2025 20:05:13 +0000 (0:00:06.365) 0:05:32.579 *********** 2025-05-13 20:06:05.032213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032230 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
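The skyline healthchecks are rendered per node: healthcheck_curl targets http://192.168.16.10:9998/docs on node 0, .11 on node 1 and .12 on node 2, so each container is probed on its own API-interface address rather than through the VIP. A sketch of one rendered entry from the printed values (ports: 9998 apiserver, 9999 console; reading tls_backend "no" as HAProxy speaking plain HTTP to the backends is an interpretation of the flag, not a quote from the role):

    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"]
      interval: "30"
      retries: "3"
      start_period: "5"
      timeout: "30"
    haproxy:
      skyline_apiserver:
        enabled: "yes"
        mode: http
        external: false
        port: "9998"
        tls_backend: "no"
      skyline_apiserver_external:
        enabled: "yes"                  # unlike prometheus, the external side is on
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9998"
        listen_port: "9998"
        tls_backend: "no"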
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032245 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-13 20:06:05.032259 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032264 | orchestrator | 2025-05-13 20:06:05.032273 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-13 20:06:05.032281 | orchestrator | Tuesday 13 May 2025 20:05:13 +0000 (0:00:00.628) 0:05:33.207 *********** 2025-05-13 20:06:05.032286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032305 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032333 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-13 20:06:05.032351 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032356 | orchestrator | 2025-05-13 20:06:05.032360 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-13 20:06:05.032365 | orchestrator | Tuesday 13 May 2025 20:05:15 +0000 (0:00:01.551) 0:05:34.758 *********** 2025-05-13 20:06:05.032369 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.032374 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.032379 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.032383 | orchestrator | 2025-05-13 20:06:05.032388 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-13 20:06:05.032392 | orchestrator | Tuesday 13 May 2025 20:05:16 +0000 (0:00:01.302) 0:05:36.061 *********** 2025-05-13 20:06:05.032401 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.032406 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.032410 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.032415 | orchestrator | 2025-05-13 20:06:05.032419 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-13 20:06:05.032424 | orchestrator | Tuesday 13 May 2025 20:05:18 +0000 (0:00:02.145) 0:05:38.207 *********** 2025-05-13 20:06:05.032428 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032433 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032437 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032442 | orchestrator | 2025-05-13 20:06:05.032446 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-13 20:06:05.032451 | orchestrator | Tuesday 13 May 2025 20:05:19 +0000 (0:00:00.305) 0:05:38.513 *********** 2025-05-13 20:06:05.032455 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032474 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032479 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032483 | orchestrator | 2025-05-13 20:06:05.032494 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-13 20:06:05.032499 | orchestrator | Tuesday 13 May 2025 20:05:19 +0000 (0:00:00.572) 0:05:39.086 *********** 2025-05-13 20:06:05.032503 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032512 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032517 | orchestrator | 2025-05-13 20:06:05.032522 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-13 20:06:05.032526 | orchestrator | Tuesday 13 May 2025 20:05:19 +0000 (0:00:00.297) 0:05:39.384 *********** 2025-05-13 20:06:05.032531 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032535 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032540 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032544 | orchestrator | 2025-05-13 20:06:05.032549 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-13 20:06:05.032554 | orchestrator | Tuesday 13 May 2025 20:05:20 +0000 (0:00:00.319) 0:05:39.703 *********** 2025-05-13 
20:06:05.032558 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032563 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032567 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032572 | orchestrator | 2025-05-13 20:06:05.032576 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-13 20:06:05.032581 | orchestrator | Tuesday 13 May 2025 20:05:20 +0000 (0:00:00.317) 0:05:40.021 *********** 2025-05-13 20:06:05.032585 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032590 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032594 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032599 | orchestrator | 2025-05-13 20:06:05.032603 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-13 20:06:05.032608 | orchestrator | Tuesday 13 May 2025 20:05:21 +0000 (0:00:00.796) 0:05:40.817 *********** 2025-05-13 20:06:05.032613 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032617 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032622 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032626 | orchestrator | 2025-05-13 20:06:05.032631 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-13 20:06:05.032635 | orchestrator | Tuesday 13 May 2025 20:05:21 +0000 (0:00:00.639) 0:05:41.457 *********** 2025-05-13 20:06:05.032640 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032645 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032649 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032654 | orchestrator | 2025-05-13 20:06:05.032658 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-13 20:06:05.032663 | orchestrator | Tuesday 13 May 2025 20:05:22 +0000 (0:00:00.347) 0:05:41.804 *********** 2025-05-13 20:06:05.032667 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032676 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032680 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032685 | orchestrator | 2025-05-13 20:06:05.032689 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-13 20:06:05.032694 | orchestrator | Tuesday 13 May 2025 20:05:23 +0000 (0:00:01.195) 0:05:43.000 *********** 2025-05-13 20:06:05.032699 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032703 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032708 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032712 | orchestrator | 2025-05-13 20:06:05.032717 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-13 20:06:05.032721 | orchestrator | Tuesday 13 May 2025 20:05:24 +0000 (0:00:00.867) 0:05:43.867 *********** 2025-05-13 20:06:05.032726 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032730 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032735 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032739 | orchestrator | 2025-05-13 20:06:05.032744 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-13 20:06:05.032749 | orchestrator | Tuesday 13 May 2025 20:05:25 +0000 (0:00:00.886) 0:05:44.753 *********** 2025-05-13 20:06:05.032753 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.032758 | orchestrator | changed: [testbed-node-1] 2025-05-13 
20:06:05.032763 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.032767 | orchestrator | 2025-05-13 20:06:05.032772 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-13 20:06:05.032776 | orchestrator | Tuesday 13 May 2025 20:05:30 +0000 (0:00:04.838) 0:05:49.592 *********** 2025-05-13 20:06:05.032781 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032785 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032790 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032794 | orchestrator | 2025-05-13 20:06:05.032799 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-13 20:06:05.032804 | orchestrator | Tuesday 13 May 2025 20:05:33 +0000 (0:00:03.725) 0:05:53.317 *********** 2025-05-13 20:06:05.032808 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.032813 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.032817 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.032822 | orchestrator | 2025-05-13 20:06:05.032826 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-13 20:06:05.032831 | orchestrator | Tuesday 13 May 2025 20:05:47 +0000 (0:00:13.210) 0:06:06.528 *********** 2025-05-13 20:06:05.032835 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.032840 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.032844 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.032849 | orchestrator | 2025-05-13 20:06:05.032853 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-13 20:06:05.032858 | orchestrator | Tuesday 13 May 2025 20:05:47 +0000 (0:00:00.742) 0:06:07.271 *********** 2025-05-13 20:06:05.032863 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:06:05.032867 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:06:05.032872 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:06:05.032876 | orchestrator | 2025-05-13 20:06:05.032881 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-13 20:06:05.032885 | orchestrator | Tuesday 13 May 2025 20:05:57 +0000 (0:00:09.298) 0:06:16.569 *********** 2025-05-13 20:06:05.032890 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032894 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032899 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032903 | orchestrator | 2025-05-13 20:06:05.032908 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-13 20:06:05.032917 | orchestrator | Tuesday 13 May 2025 20:05:57 +0000 (0:00:00.339) 0:06:16.909 *********** 2025-05-13 20:06:05.032922 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032927 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032931 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032941 | orchestrator | 2025-05-13 20:06:05.032948 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-13 20:06:05.032956 | orchestrator | Tuesday 13 May 2025 20:05:58 +0000 (0:00:00.666) 0:06:17.575 *********** 2025-05-13 20:06:05.032963 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.032970 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.032978 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.032985 | orchestrator | 
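[Editor's note] The haproxy-config items skipped earlier (skyline_apiserver, skyline_console, and their *_external counterparts) show the shape of the service map kolla-ansible feeds to its HAProxy configuration: each entry carries a mode, an internal/external flag, a listen port, an optional external FQDN, and whether TLS terminates at the backend. As a rough illustration of how such a map could drive listener generation — this is not kolla-ansible's actual Jinja template; render_listeners and both VIP addresses are made up for the example:

    # Rough sketch: turn a kolla-style "haproxy" service map (as logged above
    # for skyline) into HAProxy frontend stanzas. Hypothetical helper, not
    # kolla-ansible's real template.
    services = {
        "skyline_apiserver": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
        "skyline_apiserver_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
    }

    def render_listeners(services, internal_vip, external_vip):
        # One frontend per enabled service; the 'external' flag selects the VIP.
        stanzas = []
        for name, svc in services.items():
            if svc.get("enabled") != "yes":
                continue
            vip = external_vip if svc.get("external") else internal_vip
            stanzas.append(
                f"frontend {name}\n"
                f"    mode {svc['mode']}\n"
                f"    bind {vip}:{svc['listen_port']}\n"
                f"    default_backend {name}_back"
            )
        return "\n\n".join(stanzas)

    # Both VIPs below are placeholder values, not taken from this deployment.
    print(render_listeners(services, "192.168.16.254", "203.0.113.10"))
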
2025-05-13 20:06:05.032990 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-13 20:06:05.032994 | orchestrator | Tuesday 13 May 2025 20:05:58 +0000 (0:00:00.354) 0:06:17.929 *********** 2025-05-13 20:06:05.032999 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.033003 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.033008 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.033012 | orchestrator | 2025-05-13 20:06:05.033017 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-13 20:06:05.033022 | orchestrator | Tuesday 13 May 2025 20:05:58 +0000 (0:00:00.323) 0:06:18.253 *********** 2025-05-13 20:06:05.033026 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.033030 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.033035 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.033040 | orchestrator | 2025-05-13 20:06:05.033044 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-13 20:06:05.033049 | orchestrator | Tuesday 13 May 2025 20:05:59 +0000 (0:00:00.330) 0:06:18.584 *********** 2025-05-13 20:06:05.033054 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:06:05.033058 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:06:05.033063 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:06:05.033067 | orchestrator | 2025-05-13 20:06:05.033072 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-13 20:06:05.033076 | orchestrator | Tuesday 13 May 2025 20:05:59 +0000 (0:00:00.693) 0:06:19.277 *********** 2025-05-13 20:06:05.033081 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.033085 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.033090 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.033095 | orchestrator | 2025-05-13 20:06:05.033099 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-13 20:06:05.033104 | orchestrator | Tuesday 13 May 2025 20:06:00 +0000 (0:00:00.925) 0:06:20.202 *********** 2025-05-13 20:06:05.033108 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:06:05.033113 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:06:05.033117 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:06:05.033122 | orchestrator | 2025-05-13 20:06:05.033127 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:06:05.033134 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 20:06:05.033141 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 20:06:05.033149 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-13 20:06:05.033156 | orchestrator | 2025-05-13 20:06:05.033164 | orchestrator | 2025-05-13 20:06:05.033171 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:06:05.033178 | orchestrator | Tuesday 13 May 2025 20:06:01 +0000 (0:00:00.795) 0:06:20.998 *********** 2025-05-13 20:06:05.033186 | orchestrator | =============================================================================== 2025-05-13 20:06:05.033194 | orchestrator | loadbalancer : Start backup proxysql container 
------------------------- 13.21s 2025-05-13 20:06:05.033203 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.30s 2025-05-13 20:06:05.033207 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.70s 2025-05-13 20:06:05.033217 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.54s 2025-05-13 20:06:05.033221 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.37s 2025-05-13 20:06:05.033226 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.93s 2025-05-13 20:06:05.033231 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.44s 2025-05-13 20:06:05.033235 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.84s 2025-05-13 20:06:05.033240 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.56s 2025-05-13 20:06:05.033244 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.25s 2025-05-13 20:06:05.033249 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s 2025-05-13 20:06:05.033253 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.19s 2025-05-13 20:06:05.033258 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.18s 2025-05-13 20:06:05.033263 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.15s 2025-05-13 20:06:05.033267 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.83s 2025-05-13 20:06:05.033272 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.83s 2025-05-13 20:06:05.033276 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.77s 2025-05-13 20:06:05.033281 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.76s 2025-05-13 20:06:05.033288 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.73s 2025-05-13 20:06:05.033296 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.68s 2025-05-13 20:06:08.054315 | orchestrator | 2025-05-13 20:06:08 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:06:08.054747 | orchestrator | 2025-05-13 20:06:08 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:06:08.055437 | orchestrator | 2025-05-13 20:06:08 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:06:08.057962 | orchestrator | 2025-05-13 20:06:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:06:11.111146 | orchestrator | 2025-05-13 20:06:11 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:06:11.113810 | orchestrator | 2025-05-13 20:06:11 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:06:11.113881 | orchestrator | 2025-05-13 20:06:11 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:06:11.113895 | orchestrator | 2025-05-13 20:06:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:06:14.168270 | orchestrator | 2025-05-13 20:06:14 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in 
state STARTED 2025-05-13 20:06:14.168390 | orchestrator | 2025-05-13 20:06:14 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:06:14.168414 | orchestrator | 2025-05-13 20:06:14 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:06:14.168435 | orchestrator | 2025-05-13 20:06:14 | INFO  | Wait 1 second(s) until the next check
[... the same three status checks repeat every ~3 seconds from 20:06:17 through 20:07:54: tasks c47f64d6-f890-45e7-9052-6bae5131d61b, 635814da-fbd0-4f33-8c66-8f4bed802a05 and 50c61596-ef47-4202-962e-5d0b51567576 all remain in state STARTED throughout, each round followed by "Wait 1 second(s) until the next check" ...]
2025-05-13 20:07:57.986411 | orchestrator | 2025-05-13 20:07:57 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:07:57.987515 | orchestrator
| 2025-05-13 20:07:57 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:07:57.989351 | orchestrator | 2025-05-13 20:07:57 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:07:57.990224 | orchestrator | 2025-05-13 20:07:57 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:01.041135 | orchestrator | 2025-05-13 20:08:01 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:01.042002 | orchestrator | 2025-05-13 20:08:01 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:01.043355 | orchestrator | 2025-05-13 20:08:01 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:08:01.043399 | orchestrator | 2025-05-13 20:08:01 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:04.095732 | orchestrator | 2025-05-13 20:08:04 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:04.096675 | orchestrator | 2025-05-13 20:08:04 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:04.098680 | orchestrator | 2025-05-13 20:08:04 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state STARTED 2025-05-13 20:08:04.098745 | orchestrator | 2025-05-13 20:08:04 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:07.145506 | orchestrator | 2025-05-13 20:08:07 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:07.147807 | orchestrator | 2025-05-13 20:08:07 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:07.149675 | orchestrator | 2025-05-13 20:08:07 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:08:07.157318 | orchestrator | 2025-05-13 20:08:07 | INFO  | Task 50c61596-ef47-4202-962e-5d0b51567576 is in state SUCCESS 2025-05-13 20:08:07.160483 | orchestrator | 2025-05-13 20:08:07.160553 | orchestrator | 2025-05-13 20:08:07.160567 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-13 20:08:07.160580 | orchestrator | 2025-05-13 20:08:07.160591 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-13 20:08:07.160602 | orchestrator | Tuesday 13 May 2025 19:56:49 +0000 (0:00:00.701) 0:00:00.701 *********** 2025-05-13 20:08:07.160965 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.160983 | orchestrator | 2025-05-13 20:08:07.161023 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-13 20:08:07.161035 | orchestrator | Tuesday 13 May 2025 19:56:50 +0000 (0:00:00.967) 0:00:01.668 *********** 2025-05-13 20:08:07.161045 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161056 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161066 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161076 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161085 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161095 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161105 | orchestrator | 2025-05-13 20:08:07.161114 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-13 20:08:07.161124 | orchestrator | Tuesday 13 May 2025 19:56:51 +0000 
(0:00:01.411) 0:00:03.079 *********** 2025-05-13 20:08:07.161134 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161144 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161153 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161163 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161179 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161226 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161245 | orchestrator | 2025-05-13 20:08:07.161287 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-13 20:08:07.161302 | orchestrator | Tuesday 13 May 2025 19:56:52 +0000 (0:00:00.784) 0:00:03.864 *********** 2025-05-13 20:08:07.161317 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161333 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161349 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161366 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161382 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161398 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161412 | orchestrator | 2025-05-13 20:08:07.161423 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-13 20:08:07.161516 | orchestrator | Tuesday 13 May 2025 19:56:53 +0000 (0:00:01.092) 0:00:04.957 *********** 2025-05-13 20:08:07.161528 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161538 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161548 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161557 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161566 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161598 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161611 | orchestrator | 2025-05-13 20:08:07.161623 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-13 20:08:07.161635 | orchestrator | Tuesday 13 May 2025 19:56:54 +0000 (0:00:00.811) 0:00:05.768 *********** 2025-05-13 20:08:07.161646 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161658 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161669 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161679 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161690 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161717 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161728 | orchestrator | 2025-05-13 20:08:07.161738 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-13 20:08:07.161750 | orchestrator | Tuesday 13 May 2025 19:56:54 +0000 (0:00:00.621) 0:00:06.389 *********** 2025-05-13 20:08:07.161781 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161793 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161804 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.161814 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.161825 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.161837 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.161847 | orchestrator | 2025-05-13 20:08:07.161859 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-13 20:08:07.161870 | orchestrator | Tuesday 13 May 2025 19:56:55 +0000 (0:00:01.026) 0:00:07.416 *********** 2025-05-13 20:08:07.161881 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.161894 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 20:08:07.161906 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.161917 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.161928 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.161940 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.161950 | orchestrator | 2025-05-13 20:08:07.161960 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-13 20:08:07.161970 | orchestrator | Tuesday 13 May 2025 19:56:56 +0000 (0:00:01.023) 0:00:08.440 *********** 2025-05-13 20:08:07.161980 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.161989 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.161999 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.162008 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.162069 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.162080 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.162089 | orchestrator | 2025-05-13 20:08:07.162099 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-13 20:08:07.162108 | orchestrator | Tuesday 13 May 2025 19:56:57 +0000 (0:00:01.064) 0:00:09.504 *********** 2025-05-13 20:08:07.162118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.162129 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.162138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.162148 | orchestrator | 2025-05-13 20:08:07.162157 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-13 20:08:07.162166 | orchestrator | Tuesday 13 May 2025 19:56:58 +0000 (0:00:00.860) 0:00:10.365 *********** 2025-05-13 20:08:07.162176 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.162185 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.162195 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.162204 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.162213 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.162222 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.162232 | orchestrator | 2025-05-13 20:08:07.162275 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-13 20:08:07.162286 | orchestrator | Tuesday 13 May 2025 19:56:59 +0000 (0:00:01.067) 0:00:11.433 *********** 2025-05-13 20:08:07.162296 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.162305 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.162315 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.162324 | orchestrator | 2025-05-13 20:08:07.162334 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-13 20:08:07.162343 | orchestrator | Tuesday 13 May 2025 19:57:02 +0000 (0:00:02.514) 0:00:13.947 *********** 2025-05-13 20:08:07.162353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.162363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.162373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.162382 | orchestrator | skipping: [testbed-node-0] 
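[Editor's note] To decide whether a monitor is already running, ceph-ansible probes each mon host for a container named ceph-mon-<hostname>; the skipped loop items just below show the underlying `docker ps -q --filter name=ceph-mon-testbed-node-N` invocations, all of which returned empty output on this fresh deployment. A minimal standalone reconstruction of that probe — not the role's actual code; running_mon is a made-up helper name:

    # Sketch of ceph-ansible's "find a running mon container" probe, per the
    # docker ps calls logged below. Illustration only.
    import subprocess

    def running_mon(container_binary: str, hostname: str) -> bool:
        # `docker ps -q` (or `podman ps -q`) prints one container ID per line;
        # empty stdout with rc 0 means no matching container is running.
        result = subprocess.run(
            [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True, text=True, check=False,
        )
        return bool(result.stdout.strip())

    # On this run every probe came back empty, e.g.
    # running_mon("docker", "testbed-node-0") -> False
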
2025-05-13 20:08:07.162391 | orchestrator | 2025-05-13 20:08:07.162409 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-13 20:08:07.162418 | orchestrator | Tuesday 13 May 2025 19:57:03 +0000 (0:00:00.789) 0:00:14.737 *********** 2025-05-13 20:08:07.162430 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162443 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162545 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162556 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.162566 | orchestrator | 2025-05-13 20:08:07.162576 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-13 20:08:07.162586 | orchestrator | Tuesday 13 May 2025 19:57:04 +0000 (0:00:00.937) 0:00:15.675 *********** 2025-05-13 20:08:07.162604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162617 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162637 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.162647 | orchestrator | 2025-05-13 20:08:07.162657 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-13 20:08:07.162667 | orchestrator | Tuesday 13 May 2025 19:57:04 +0000 (0:00:00.158) 0:00:15.834 *********** 2025-05-13 20:08:07.162679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-13 19:57:00.445854', 'end': '2025-05-13 19:57:00.730767', 'delta': '0:00:00.284913', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-13 19:57:01.298363', 'end': '2025-05-13 19:57:01.561057', 'delta': '0:00:00.262694', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162719 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-13 19:57:01.977201', 'end': '2025-05-13 19:57:02.234747', 'delta': '0:00:00.257546', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.162729 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.162739 | orchestrator | 2025-05-13 20:08:07.162749 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-13 20:08:07.162759 | orchestrator | Tuesday 13 May 2025 19:57:04 +0000 (0:00:00.226) 0:00:16.060 *********** 2025-05-13 20:08:07.162769 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.162778 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.162788 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.162797 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.162807 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.162816 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.162826 | orchestrator | 2025-05-13 20:08:07.162835 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-13 20:08:07.162845 | orchestrator | Tuesday 13 May 2025 19:57:05 +0000 (0:00:01.203) 0:00:17.263 *********** 2025-05-13 20:08:07.162854 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.162864 | orchestrator | 2025-05-13 20:08:07.162874 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-13 20:08:07.162888 | orchestrator | Tuesday 13 May 2025 19:57:06 +0000 (0:00:00.813) 0:00:18.077 *********** 2025-05-13 20:08:07.162898 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.162908 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.162917 | orchestrator | skipping: 
[testbed-node-2] 2025-05-13 20:08:07.162927 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.162936 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.162946 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.162955 | orchestrator | 2025-05-13 20:08:07.162965 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-13 20:08:07.162974 | orchestrator | Tuesday 13 May 2025 19:57:07 +0000 (0:00:00.962) 0:00:19.040 *********** 2025-05-13 20:08:07.162984 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.162994 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163003 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.163012 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163022 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163031 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163041 | orchestrator | 2025-05-13 20:08:07.163050 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-13 20:08:07.163060 | orchestrator | Tuesday 13 May 2025 19:57:08 +0000 (0:00:01.038) 0:00:20.078 *********** 2025-05-13 20:08:07.163070 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163079 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163088 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.163098 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163107 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163122 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163132 | orchestrator | 2025-05-13 20:08:07.163142 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-13 20:08:07.163151 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:00.845) 0:00:20.923 *********** 2025-05-13 20:08:07.163161 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163239 | orchestrator | 2025-05-13 20:08:07.163349 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-13 20:08:07.163360 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:00.134) 0:00:21.058 *********** 2025-05-13 20:08:07.163370 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163380 | orchestrator | 2025-05-13 20:08:07.163390 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-13 20:08:07.163400 | orchestrator | Tuesday 13 May 2025 19:57:09 +0000 (0:00:00.189) 0:00:21.247 *********** 2025-05-13 20:08:07.163410 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163420 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163430 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.163440 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163462 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163472 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163482 | orchestrator | 2025-05-13 20:08:07.163492 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-13 20:08:07.163510 | orchestrator | Tuesday 13 May 2025 19:57:10 +0000 (0:00:00.652) 0:00:21.900 *********** 2025-05-13 20:08:07.163520 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163530 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163540 | orchestrator | 
skipping: [testbed-node-2] 2025-05-13 20:08:07.163550 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163560 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163570 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163580 | orchestrator | 2025-05-13 20:08:07.163590 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-13 20:08:07.163600 | orchestrator | Tuesday 13 May 2025 19:57:11 +0000 (0:00:00.964) 0:00:22.864 *********** 2025-05-13 20:08:07.163610 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163619 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163629 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.163639 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163649 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163659 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163668 | orchestrator | 2025-05-13 20:08:07.163679 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-13 20:08:07.163689 | orchestrator | Tuesday 13 May 2025 19:57:11 +0000 (0:00:00.657) 0:00:23.522 *********** 2025-05-13 20:08:07.163699 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163709 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163724 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.163745 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.163770 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.163787 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.163802 | orchestrator | 2025-05-13 20:08:07.163817 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-13 20:08:07.163831 | orchestrator | Tuesday 13 May 2025 19:57:12 +0000 (0:00:00.896) 0:00:24.418 *********** 2025-05-13 20:08:07.163952 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.163974 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.163989 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.164042 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.164059 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.164074 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.164091 | orchestrator | 2025-05-13 20:08:07.164183 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-13 20:08:07.164216 | orchestrator | Tuesday 13 May 2025 19:57:13 +0000 (0:00:00.679) 0:00:25.098 *********** 2025-05-13 20:08:07.164232 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.164276 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.164294 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.164327 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.164357 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.164372 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.164410 | orchestrator | 2025-05-13 20:08:07.164427 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-13 20:08:07.164445 | orchestrator | Tuesday 13 May 2025 19:57:14 +0000 (0:00:00.837) 0:00:25.935 *********** 2025-05-13 20:08:07.164459 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.164473 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.164489 | 
orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.164574 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.164594 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.164610 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.164826 | orchestrator | 2025-05-13 20:08:07.164847 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-13 20:08:07.164866 | orchestrator | Tuesday 13 May 2025 19:57:14 +0000 (0:00:00.584) 0:00:26.520 *********** 2025-05-13 20:08:07.164885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.164905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.164922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.164941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.164982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca', 'scsi-SQEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8121aa-6ca9-42c7-878e-7472efa518ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165497 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea', 'scsi-SQEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_01f96eca-3323-4c61-8f0f-6c13d6bd13ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165554 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.165569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165681 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.165693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf', 'scsi-SQEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_8ed24ab4-c68b-4a4d-ac28-b638953962bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.165754 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.165767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e', 'dm-uuid-LVM-rUzZXZKL8QvWDWEmhCrsMJVItcd4niAXg5NokKKGy3QkHSq9S0nIJSi2Q21T5NwR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2', 
'dm-uuid-LVM-vBJkM2Ms9xoHjlu9Xm9OMIS3PvG9U5373VzXqktSVgwnrRKE1dB0oZToZyk5ZKn3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6', 'dm-uuid-LVM-HIYs3chgx9w0QZEoLwAI7WWwTHGM5AD06WmuLuFfZnhJzmBJxQa9IZ2hR7qsn9Rt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc', 'dm-uuid-LVM-uQFKQmydpWQsLnUFa0O91r217huYWLBPpRKPNOkZYm2ddggQo0qiQ3GpdWmYmqcX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165987 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.165999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c', 'dm-uuid-LVM-FrPe5ukHniNrH6lviJmTua1GloekeVZWHXqf71qIYfWnlrHYWae7nvsOEo1vSfYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898', 'dm-uuid-LVM-2b5UfzfqWpwbtNFwxAJo3rUUbWYFIFueNhzhEmmBgAXFJCQFegzZGzI75pKCFbW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eCPfZX-2Obe-Qkxq-eA0e-0CxC-TVhB-BfSZ3B', 'scsi-0QEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd', 'scsi-SQEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KHL89F-O2YZ-U9aB-y3jM-YLBU-PA1u-5P96ej', 'scsi-0QEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043', 'scsi-SQEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-alfQ1Y-Kvv2-D8lJ-HNk0-0GmX-PLlh-wukyi0', 'scsi-0QEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161', 'scsi-SQEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdUup2-oP2G-uJlD-mDPP-VpAJ-Acbk-ji6VVF', 'scsi-0QEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36', 'scsi-SQEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4', 'scsi-SQEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb', 'scsi-SQEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166491 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.166503 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.166516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:08:07.166597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i0C0RG-wQcy-1Jbz-VMJa-5NhQ-4ZiG-BdNyaC', 'scsi-0QEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711', 'scsi-SQEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sYEEX0-cwtG-HcvZ-EkWI-2rqr-mPns-GCjGTv', 'scsi-0QEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d', 'scsi-SQEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61', 'scsi-SQEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:08:07.166695 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.166710 | orchestrator | 2025-05-13 20:08:07.166724 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-13 20:08:07.166737 | orchestrator | Tuesday 13 May 2025 19:57:16 +0000 (0:00:01.667) 0:00:28.188 *********** 2025-05-13 20:08:07.166751 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.166765 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:08:07.166779 | orchestrator | skipping: [testbed-node-0] => 
(items loop2-loop7, sda, sr0; per-item device-facts dicts condensed) -- false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
[condensed device inventory: every node reports eight empty loop devices (loop0-loop7), sda = 80.00 GB QEMU HARDDISK (sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB) and sr0 = QEMU DVD-ROM (label config-2); testbed-node-3..5 additionally report dm-0/dm-1 = 20.00 GB ceph OSD block volumes, sdb/sdc = 20.00 GB LVM PVs backing them, and sdd = 20.00 GB unused]
2025-05-13 20:08:07.166876 | orchestrator | skipping: [testbed-node-1] => (items loop0-loop7, sda, sr0) -- false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
2025-05-13 20:08:07.167137 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:08:07.168582 | orchestrator | skipping: [testbed-node-2] => (items loop0-loop7, sda, sr0) -- same false_condition
2025-05-13 20:08:07.168890 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:08:07.168964 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0) -- false_condition: 'osd_auto_discovery | default(False) | bool'
2025-05-13 20:08:07.169033 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:08:07.169186 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0) -- same false_condition
2025-05-13 20:08:07.169536 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:08:07.169682 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0) -- same false_condition
2025-05-13 20:08:07.170162 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:08:07.170413 | orchestrator | skipping: [testbed-node-5]
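The block above is the tail of the ceph-facts device-discovery loop: it iterates over every block device in the host facts, but only acts on OSD hosts that have automatic disk discovery enabled. The control-plane nodes (testbed-node-0..2) fail the group check, while the storage nodes (testbed-node-3..5, deployed with an explicit device list) fail the auto-discovery check. A minimal Ansible sketch of that guard pattern; the loop source and the two conditions are taken verbatim from the log, while the task name and the set_fact body are hypothetical:

    - name: Collect devices for OSD auto-discovery   # hypothetical task name
      ansible.builtin.set_fact:
        _devices: "{{ _devices | default([]) + ['/dev/' + item.key] }}"   # illustrative accumulation only
      loop: "{{ ansible_facts['devices'] | dict2items }}"                 # yields the per-device items dumped above
      when:
        - inventory_hostname in groups.get(osd_group_name, [])   # false on testbed-node-0..2
        - osd_auto_discovery | default(False) | bool              # false on testbed-node-3..5

Listing both guards under one when is consistent with the output: Ansible reports only the first condition that evaluates to false, which is exactly the false_condition split seen between the two groups of nodes.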
2025-05-13 20:08:07.170425 | orchestrator |
2025-05-13 20:08:07.170436 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-13 20:08:07.170448 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:01.500) 0:00:29.688 ***********
2025-05-13 20:08:07.170460 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:08:07.170472 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:08:07.170483 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:08:07.170552 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:08:07.170566 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:08:07.170585 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:08:07.170595 | orchestrator |
2025-05-13 20:08:07.170606 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-13 20:08:07.170617 | orchestrator | Tuesday 13 May 2025 19:57:19 +0000 (0:00:01.590) 0:00:31.278 ***********
2025-05-13 20:08:07.170629 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:08:07.170640 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:08:07.170651 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:08:07.170662 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:08:07.170672 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:08:07.170683 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:08:07.170693 | orchestrator |
2025-05-13 20:08:07.170704 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-13 20:08:07.170715 | orchestrator | Tuesday 13 May 2025 19:57:20 +0000 (0:00:00.767) 0:00:32.046 ***********
2025-05-13 20:08:07.170725 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:08:07.170735 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:08:07.170746 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:08:07.170755 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:08:07.170766 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:08:07.170776 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:08:07.170787 | orchestrator |
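The conf check and the Set/Read pairs around it implement a fallback for osd_pool_default_crush_rule: assume the built-in default unless an already-rendered ceph.conf overrides it. A hedged sketch of that pattern, assuming the conventional conf path and a -1 sentinel; the guard on the read task is an assumption (on this fresh deployment it evaluates false, hence all the skips that follow):

    - name: Check if the ceph conf exists
      ansible.builtin.stat:
        path: "/etc/ceph/{{ cluster | default('ceph') }}.conf"   # assumed path
      register: ceph_conf

    - name: Set default osd_pool_default_crush_rule fact
      ansible.builtin.set_fact:
        osd_pool_default_crush_rule: -1   # assumed sentinel meaning "no explicit rule configured"

    - name: Read osd pool default crush rule
      ansible.builtin.command: grep 'osd pool default crush rule' {{ ceph_conf.stat.path }}
      register: crush_rule_from_conf
      changed_when: false
      failed_when: false
      when: ceph_conf.stat.exists | bool   # assumed guard; false here, so the task is skipped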
2025-05-13 20:08:07.170436 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-13 20:08:07.170448 | orchestrator | Tuesday 13 May 2025 19:57:18 +0000 (0:00:01.500) 0:00:29.688 *********** 2025-05-13 20:08:07.170460 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.170472 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.170483 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.170552 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.170566 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.170585 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.170595 | orchestrator | 2025-05-13 20:08:07.170606 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-05-13 20:08:07.170617 | orchestrator | Tuesday 13 May 2025 19:57:19 +0000 (0:00:01.590) 0:00:31.278 *********** 2025-05-13 20:08:07.170629 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.170640 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.170651 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.170662 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.170672 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.170683 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.170693 | orchestrator | 2025-05-13 20:08:07.170704 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-13 20:08:07.170715 | orchestrator | Tuesday 13 May 2025 19:57:20 +0000 (0:00:00.767) 0:00:32.046 *********** 2025-05-13 20:08:07.170725 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.170735 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.170746 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.170755 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.170766 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.170776 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.170787 | orchestrator | 2025-05-13 20:08:07.170815 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-13 20:08:07.170825 | orchestrator | Tuesday 13 May 2025 19:57:21 +0000 (0:00:00.748) 0:00:32.794 *********** 2025-05-13 20:08:07.170835 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.170846 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.170856 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.170866 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.170877 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.170888 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.170899 | orchestrator | 2025-05-13 20:08:07.170909 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-13 20:08:07.170919 | orchestrator | Tuesday 13 May 2025 19:57:21 +0000 (0:00:00.506) 0:00:33.301 *********** 2025-05-13 20:08:07.170929 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.170939 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.170949 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.170959 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.170970 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.170981 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.170992 | orchestrator | 2025-05-13 20:08:07.171002 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-13 20:08:07.171012 | orchestrator | Tuesday 13 May 2025 19:57:22 +0000 (0:00:01.044) 0:00:34.345 *********** 2025-05-13 20:08:07.171023 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.171034 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.171045 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.171056 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.171067 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.171076 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.171086 | orchestrator | 2025-05-13 20:08:07.171097 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-13 20:08:07.171115 | orchestrator | Tuesday 13 May 2025 19:57:24 +0000 (0:00:01.231) 0:00:35.577 *********** 2025-05-13 20:08:07.171127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.171138 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-13 20:08:07.171150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-13 20:08:07.171162 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-13 20:08:07.171172 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 20:08:07.171183 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-13 20:08:07.171204 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-13 20:08:07.171217 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-13 20:08:07.171227 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-13 20:08:07.171237 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-13 20:08:07.171269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 20:08:07.171280 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-13 20:08:07.171292 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-13 20:08:07.171303 | orchestrator | ok: [testbed-node-3] =>
(item=testbed-node-2) 2025-05-13 20:08:07.171316 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-13 20:08:07.171327 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-13 20:08:07.171339 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-13 20:08:07.171349 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-13 20:08:07.171360 | orchestrator | 2025-05-13 20:08:07.171372 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-13 20:08:07.171385 | orchestrator | Tuesday 13 May 2025 19:57:27 +0000 (0:00:03.320) 0:00:38.897 *********** 2025-05-13 20:08:07.171396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.171407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.171418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.171428 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.171439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-13 20:08:07.171449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-13 20:08:07.171461 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-13 20:08:07.171473 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.171483 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-13 20:08:07.171494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-13 20:08:07.171505 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-13 20:08:07.171516 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.171572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 20:08:07.171584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 20:08:07.171597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 20:08:07.171607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-13 20:08:07.171616 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.171627 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-13 20:08:07.171637 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-13 20:08:07.171647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-13 20:08:07.171657 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.171668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-13 20:08:07.171680 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-13 20:08:07.171691 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.171701 | orchestrator | 2025-05-13 20:08:07.171711 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-13 20:08:07.171722 | orchestrator | Tuesday 13 May 2025 19:57:27 +0000 (0:00:00.517) 0:00:39.414 *********** 2025-05-13 20:08:07.171733 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.171744 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.171755 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.171766 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.171776 | orchestrator | 2025-05-13 
20:08:07.171787 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-13 20:08:07.171808 | orchestrator | Tuesday 13 May 2025 19:57:29 +0000 (0:00:01.301) 0:00:40.716 *********** 2025-05-13 20:08:07.171819 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.171829 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.171838 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.171848 | orchestrator | 2025-05-13 20:08:07.171858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-13 20:08:07.171868 | orchestrator | Tuesday 13 May 2025 19:57:29 +0000 (0:00:00.331) 0:00:41.047 *********** 2025-05-13 20:08:07.171878 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.171889 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.171899 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.171909 | orchestrator | 2025-05-13 20:08:07.171919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-13 20:08:07.171930 | orchestrator | Tuesday 13 May 2025 19:57:30 +0000 (0:00:00.567) 0:00:41.615 *********** 2025-05-13 20:08:07.171940 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.171950 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.171960 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.171971 | orchestrator | 2025-05-13 20:08:07.171981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-13 20:08:07.171992 | orchestrator | Tuesday 13 May 2025 19:57:30 +0000 (0:00:00.575) 0:00:42.190 *********** 2025-05-13 20:08:07.172003 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.172021 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.172031 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.172041 | orchestrator | 2025-05-13 20:08:07.172052 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-13 20:08:07.172063 | orchestrator | Tuesday 13 May 2025 19:57:31 +0000 (0:00:00.903) 0:00:43.093 *********** 2025-05-13 20:08:07.172074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.172085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.172096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.172107 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.172117 | orchestrator | 2025-05-13 20:08:07.172129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-13 20:08:07.172140 | orchestrator | Tuesday 13 May 2025 19:57:32 +0000 (0:00:00.624) 0:00:43.717 *********** 2025-05-13 20:08:07.172152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.172162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.172173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.172183 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.172194 | orchestrator | 2025-05-13 20:08:07.172204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-13 20:08:07.172214 | orchestrator | Tuesday 13 May 2025 19:57:32 +0000 (0:00:00.552) 0:00:44.270 
*********** 2025-05-13 20:08:07.172226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.172237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.172269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.172281 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.172291 | orchestrator | 2025-05-13 20:08:07.172302 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-13 20:08:07.172311 | orchestrator | Tuesday 13 May 2025 19:57:33 +0000 (0:00:00.671) 0:00:44.942 *********** 2025-05-13 20:08:07.172321 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.172332 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.172344 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.172354 | orchestrator | 2025-05-13 20:08:07.172364 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-13 20:08:07.172374 | orchestrator | Tuesday 13 May 2025 19:57:33 +0000 (0:00:00.627) 0:00:45.569 *********** 2025-05-13 20:08:07.172393 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 20:08:07.172404 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-13 20:08:07.172415 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-13 20:08:07.172426 | orchestrator |
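rgw_instances, reset and then rebuilt above for the three RGW hosts (a single loop pass, item=0), is the fact that later drives the per-instance directories and environment files. Reconstructed from the values echoed further down in this log (instance rgw0, port 8081, one instance per node), the fact's shape is roughly:

# Shape of the rgw_instances fact on testbed-node-3 (reconstruction from this
# log; the role's actual task assembles it from the radosgw_* variables)
rgw_instances:
  - instance_name: rgw0              # "rgw" + loop index, hence item=0 above
    radosgw_address: 192.168.16.13   # .14 and .15 on testbed-node-4/-5
    radosgw_frontend_port: 8081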
2025-05-13 20:08:07.172436 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-13 20:08:07.172447 | orchestrator | Tuesday 13 May 2025 19:57:35 +0000 (0:00:01.034) 0:00:46.604 *********** 2025-05-13 20:08:07.172504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.172518 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.172531 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.172542 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-13 20:08:07.172552 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 20:08:07.172564 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 20:08:07.172575 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 20:08:07.172585 | orchestrator | 2025-05-13 20:08:07.172596 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-13 20:08:07.172607 | orchestrator | Tuesday 13 May 2025 19:57:35 +0000 (0:00:00.934) 0:00:47.538 *********** 2025-05-13 20:08:07.172617 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.172628 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.172639 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.172650 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-13 20:08:07.172661 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 20:08:07.172671 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 20:08:07.172682 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 20:08:07.172693 | orchestrator | 2025-05-13 20:08:07.172704 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.172717 | orchestrator | Tuesday 13 May 2025 19:57:37 +0000 (0:00:01.741) 0:00:49.280 *********** 2025-05-13 20:08:07.172729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.172741 | orchestrator | 2025-05-13 20:08:07.172752 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.172765 | orchestrator | Tuesday 13 May 2025 19:57:39 +0000 (0:00:01.416) 0:00:50.697 *********** 2025-05-13 20:08:07.172775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.172787 | orchestrator | 2025-05-13 20:08:07.172798 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.172816 | orchestrator | Tuesday 13 May 2025 19:57:40 +0000 (0:00:01.742) 0:00:52.440 *********** 2025-05-13 20:08:07.172828 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.172839 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.172849 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.172860 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.172871 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.172881 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.172892 | orchestrator | 2025-05-13 20:08:07.172904 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.172923 | orchestrator | Tuesday 13 May 2025 19:57:41 +0000 (0:00:00.912) 0:00:53.352 *********** 2025-05-13 20:08:07.172936 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.172948 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.172961 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.172972 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.172983 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.172994 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.173005 | orchestrator | 2025-05-13 20:08:07.173016 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.173026 | orchestrator | Tuesday 13 May 2025 19:57:43 +0000 (0:00:01.514) 0:00:54.866 *********** 2025-05-13 20:08:07.173038 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.173049 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.173060 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.173072 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.173083 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.173095 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.173107 | orchestrator | 2025-05-13 20:08:07.173119 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.173130 | orchestrator | Tuesday 13 May 2025 19:57:44 +0000 (0:00:01.314) 0:00:56.181 *********** 2025-05-13 20:08:07.173142 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.173153 | orchestrator | skipping:
[testbed-node-1] 2025-05-13 20:08:07.173163 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.173175 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.173187 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.173199 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.173211 | orchestrator | 2025-05-13 20:08:07.173224 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.173236 | orchestrator | Tuesday 13 May 2025 19:57:45 +0000 (0:00:01.279) 0:00:57.460 *********** 2025-05-13 20:08:07.173311 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.173326 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.173339 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.173351 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.173363 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.173375 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.173387 | orchestrator | 2025-05-13 20:08:07.173399 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.173411 | orchestrator | Tuesday 13 May 2025 19:57:46 +0000 (0:00:00.963) 0:00:58.424 *********** 2025-05-13 20:08:07.173468 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.173482 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.173495 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.173507 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.173520 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.173532 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.173544 | orchestrator | 2025-05-13 20:08:07.173557 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.173569 | orchestrator | Tuesday 13 May 2025 19:57:47 +0000 (0:00:01.145) 0:00:59.570 *********** 2025-05-13 20:08:07.173581 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.173593 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.173605 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.173618 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.173630 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.173642 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.173655 | orchestrator | 2025-05-13 20:08:07.173667 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.173679 | orchestrator | Tuesday 13 May 2025 19:57:49 +0000 (0:00:01.832) 0:01:01.402 *********** 2025-05-13 20:08:07.173692 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.173704 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.173716 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.173738 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.173751 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.173763 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.173775 | orchestrator | 2025-05-13 20:08:07.173786 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.173797 | orchestrator | Tuesday 13 May 2025 19:57:51 +0000 (0:00:01.941) 0:01:03.344 *********** 2025-05-13 20:08:07.173809 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.173822 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.173833 | orchestrator | ok: 
[testbed-node-3] 2025-05-13 20:08:07.173844 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.173855 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.173866 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.173877 | orchestrator | 2025-05-13 20:08:07.173889 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.173900 | orchestrator | Tuesday 13 May 2025 19:57:53 +0000 (0:00:02.217) 0:01:05.561 *********** 2025-05-13 20:08:07.173910 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.173920 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.173931 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.173943 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.173955 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.173966 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.173978 | orchestrator | 2025-05-13 20:08:07.173989 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.174001 | orchestrator | Tuesday 13 May 2025 19:57:54 +0000 (0:00:00.901) 0:01:06.463 *********** 2025-05-13 20:08:07.174012 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.174054 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.174065 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.174076 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.174088 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.174099 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.174110 | orchestrator | 2025-05-13 20:08:07.174122 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.174139 | orchestrator | Tuesday 13 May 2025 19:57:56 +0000 (0:00:01.154) 0:01:07.618 *********** 2025-05-13 20:08:07.174151 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.174162 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.174174 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.174185 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.174196 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.174208 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.174219 | orchestrator | 2025-05-13 20:08:07.174231 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.174242 | orchestrator | Tuesday 13 May 2025 19:57:56 +0000 (0:00:00.650) 0:01:08.269 *********** 2025-05-13 20:08:07.174273 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.174284 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.174295 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.174306 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.174317 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.174329 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.174340 | orchestrator | 2025-05-13 20:08:07.174351 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.174362 | orchestrator | Tuesday 13 May 2025 19:57:57 +0000 (0:00:01.125) 0:01:09.395 *********** 2025-05-13 20:08:07.174373 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.174384 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.174395 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.174407 | orchestrator | ok: 
[testbed-node-3] 2025-05-13 20:08:07.174418 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.174429 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.174440 | orchestrator | 2025-05-13 20:08:07.174451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.174476 | orchestrator | Tuesday 13 May 2025 19:57:58 +0000 (0:00:00.762) 0:01:10.157 *********** 2025-05-13 20:08:07.174487 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.174498 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.174509 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.174520 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.174531 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.174542 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.174553 | orchestrator | 2025-05-13 20:08:07.174564 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.174575 | orchestrator | Tuesday 13 May 2025 19:57:59 +0000 (0:00:00.866) 0:01:11.024 *********** 2025-05-13 20:08:07.174586 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.174596 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.174607 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.174618 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.174629 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.174640 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.174651 | orchestrator | 2025-05-13 20:08:07.174663 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.174714 | orchestrator | Tuesday 13 May 2025 19:58:00 +0000 (0:00:00.720) 0:01:11.744 *********** 2025-05-13 20:08:07.174728 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.174738 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.174748 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.174758 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.174769 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.174779 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.174789 | orchestrator | 2025-05-13 20:08:07.174800 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.174809 | orchestrator | Tuesday 13 May 2025 19:58:01 +0000 (0:00:01.458) 0:01:13.203 *********** 2025-05-13 20:08:07.174819 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.174828 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.174838 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.174847 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.174857 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.174867 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.174877 | orchestrator | 2025-05-13 20:08:07.174886 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.174897 | orchestrator | Tuesday 13 May 2025 19:58:02 +0000 (0:00:01.269) 0:01:14.472 *********** 2025-05-13 20:08:07.174908 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.174919 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.174929 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.174940 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.174950 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.174961 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.174970 | orchestrator |
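With the per-daemon handler facts collected, the play moves from ceph-handler to ceph-container-common, whose first two tasks below template and enable a ceph.target systemd unit; the target exists so that all ceph services on a node can be started and stopped as one group. A hedged sketch of what such a pair of tasks produces, assuming typical unit content (the role's real template may differ in wording):

# Sketch of the ceph.target generation/enabling seen below (assumed content)
- name: Generate systemd ceph target file
  ansible.builtin.copy:
    dest: /etc/systemd/system/ceph.target
    content: |
      [Unit]
      Description=ceph target allowing to start/stop all ceph*@.service instances at once

      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Enable ceph.target
  ansible.builtin.systemd:
    name: ceph.target
    enabled: true
    daemon_reload: true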
2025-05-13 20:08:07.174979 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-05-13 20:08:07.174989 | orchestrator | Tuesday 13 May 2025 19:58:04 +0000 (0:00:01.863) 0:01:16.336 *********** 2025-05-13 20:08:07.175000 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.175011 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.175021 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.175032 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.175042 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.175051 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.175061 | orchestrator | 2025-05-13 20:08:07.175071 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-05-13 20:08:07.175081 | orchestrator | Tuesday 13 May 2025 19:58:08 +0000 (0:00:03.515) 0:01:19.851 *********** 2025-05-13 20:08:07.175091 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.175100 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.175120 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.175131 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.175140 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.175150 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.175160 | orchestrator | 2025-05-13 20:08:07.175171 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-05-13 20:08:07.175181 | orchestrator | Tuesday 13 May 2025 19:58:10 +0000 (0:00:02.395) 0:01:22.247 *********** 2025-05-13 20:08:07.175193 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.175205 | orchestrator | 2025-05-13 20:08:07.175215 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-05-13 20:08:07.175234 | orchestrator | Tuesday 13 May 2025 19:58:12 +0000 (0:00:01.588) 0:01:23.836 *********** 2025-05-13 20:08:07.175302 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.175317 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.175327 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.175338 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.175349 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.175359 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.175371 | orchestrator | 2025-05-13 20:08:07.175382 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-05-13 20:08:07.175393 | orchestrator | Tuesday 13 May 2025 19:58:13 +0000 (0:00:01.200) 0:01:25.036 *********** 2025-05-13 20:08:07.175404 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.175415 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.175427 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.175437 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.175448 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.175459 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.175470 | orchestrator | 2025-05-13 20:08:07.175481 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules]
************************** 2025-05-13 20:08:07.175492 | orchestrator | Tuesday 13 May 2025 19:58:14 +0000 (0:00:01.062) 0:01:26.099 *********** 2025-05-13 20:08:07.175503 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175515 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175525 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175537 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175548 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175560 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175571 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175581 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175591 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175601 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-13 20:08:07.175612 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175623 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-13 20:08:07.175634 | orchestrator | 2025-05-13 20:08:07.175703 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-05-13 20:08:07.175717 | orchestrator | Tuesday 13 May 2025 19:58:17 +0000 (0:00:02.709) 0:01:28.808 *********** 2025-05-13 20:08:07.175728 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.175738 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.175758 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.175769 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.175779 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.175789 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.175799 | orchestrator | 2025-05-13 20:08:07.175810 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-05-13 20:08:07.175821 | orchestrator | Tuesday 13 May 2025 19:58:18 +0000 (0:00:01.352) 0:01:30.161 *********** 2025-05-13 20:08:07.175832 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.175841 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.175851 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.175860 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.175869 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.175878 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.175886 | orchestrator | 2025-05-13 20:08:07.175895 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-05-13 20:08:07.175905 | orchestrator | Tuesday 13 May 2025 19:58:19 +0000 (0:00:01.396) 0:01:31.557 *********** 2025-05-13 20:08:07.175915 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.175925 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.175935 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.175945 | orchestrator 
| skipping: [testbed-node-3] 2025-05-13 20:08:07.175955 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.175965 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.175975 | orchestrator | 2025-05-13 20:08:07.175986 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-05-13 20:08:07.175996 | orchestrator | Tuesday 13 May 2025 19:58:20 +0000 (0:00:00.830) 0:01:32.388 *********** 2025-05-13 20:08:07.176006 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176016 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176026 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176036 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176046 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176056 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176067 | orchestrator | 2025-05-13 20:08:07.176077 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-05-13 20:08:07.176087 | orchestrator | Tuesday 13 May 2025 19:58:21 +0000 (0:00:01.095) 0:01:33.483 *********** 2025-05-13 20:08:07.176098 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.176109 | orchestrator | 2025-05-13 20:08:07.176119 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-05-13 20:08:07.176129 | orchestrator | Tuesday 13 May 2025 19:58:23 +0000 (0:00:01.798) 0:01:35.282 *********** 2025-05-13 20:08:07.176139 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.176150 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.176160 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.176176 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.176186 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.176196 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.176206 | orchestrator |
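The Ceph image pull above is by far the slowest step of this role (0:01:16.295 in the next task's timer, versus a second or two for everything else), since every node fetches the full Ceph container image from the registry. In ceph-ansible the image reference is assembled from three variables, roughly as sketched here; the values are illustrative, as the testbed's actual registry, image, and tag are not shown in this log:

# Variables behind "Pulling Ceph container image" (illustrative values)
ceph_docker_registry: quay.io
ceph_docker_image: ceph/daemon
ceph_docker_image_tag: latest-reef
# The role pulls "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}".
# The alertmanager/prometheus/grafana pulls in the next task are skipped here,
# i.e. their conditions evaluate false in this deployment.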
2025-05-13 20:08:07.176217 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-05-13 20:08:07.176227 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:01:16.295) 0:02:51.578 *********** 2025-05-13 20:08:07.176237 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176264 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176276 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176286 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176296 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176307 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176325 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176335 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176345 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176355 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176365 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176375 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176385 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176395 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176405 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176415 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176424 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176432 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176440 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176448 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176456 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-13 20:08:07.176465 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-13 20:08:07.176473 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-13 20:08:07.176517 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176526 | orchestrator | 2025-05-13 20:08:07.176534 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-05-13 20:08:07.176542 | orchestrator | Tuesday 13 May 2025 19:59:40 +0000 (0:00:00.846) 0:02:52.425 *********** 2025-05-13 20:08:07.176550 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176559 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176567 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176576 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176584 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176594 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176603 | orchestrator | 2025-05-13 20:08:07.176612 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-05-13 20:08:07.176621 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.565) 0:02:52.991 *********** 2025-05-13 20:08:07.176631 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176640 | orchestrator | 2025-05-13 20:08:07.176650 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-05-13 20:08:07.176660 | orchestrator | Tuesday 13 May 2025 19:59:41 +0000 (0:00:00.136) 0:02:53.127 *********** 2025-05-13 20:08:07.176669 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176678 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176687 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176697 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176705 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176714 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176722 | orchestrator | 2025-05-13 20:08:07.176731 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-05-13 20:08:07.176739 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.793) 0:02:53.920 *********** 2025-05-13 20:08:07.176749 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176757 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176769 |
orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176779 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176787 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176796 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176822 | orchestrator | 2025-05-13 20:08:07.176832 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-05-13 20:08:07.176843 | orchestrator | Tuesday 13 May 2025 19:59:42 +0000 (0:00:00.617) 0:02:54.538 *********** 2025-05-13 20:08:07.176853 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.176864 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.176875 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.176885 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.176897 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.176907 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.176918 | orchestrator | 2025-05-13 20:08:07.176928 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-05-13 20:08:07.176939 | orchestrator | Tuesday 13 May 2025 19:59:43 +0000 (0:00:00.784) 0:02:55.323 *********** 2025-05-13 20:08:07.176948 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.176958 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.176969 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.176980 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.176992 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.177003 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.177013 | orchestrator | 2025-05-13 20:08:07.177030 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-05-13 20:08:07.177041 | orchestrator | Tuesday 13 May 2025 19:59:46 +0000 (0:00:02.358) 0:02:57.681 *********** 2025-05-13 20:08:07.177052 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.177061 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.177072 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.177082 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.177094 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.177107 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.177119 | orchestrator | 2025-05-13 20:08:07.177131 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-05-13 20:08:07.177140 | orchestrator | Tuesday 13 May 2025 19:59:46 +0000 (0:00:00.755) 0:02:58.437 *********** 2025-05-13 20:08:07.177151 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.177162 | orchestrator | 2025-05-13 20:08:07.177172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-05-13 20:08:07.177181 | orchestrator | Tuesday 13 May 2025 19:59:47 +0000 (0:00:00.979) 0:02:59.416 *********** 2025-05-13 20:08:07.177190 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177200 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177210 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177220 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177231 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177241 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177270 
| orchestrator | 2025-05-13 20:08:07.177281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-05-13 20:08:07.177290 | orchestrator | Tuesday 13 May 2025 19:59:48 +0000 (0:00:00.617) 0:03:00.034 *********** 2025-05-13 20:08:07.177299 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177307 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177316 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177324 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177333 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177341 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177349 | orchestrator | 2025-05-13 20:08:07.177358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-05-13 20:08:07.177367 | orchestrator | Tuesday 13 May 2025 19:59:49 +0000 (0:00:00.768) 0:03:00.803 *********** 2025-05-13 20:08:07.177376 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177385 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177395 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177415 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177427 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177439 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177449 | orchestrator | 2025-05-13 20:08:07.177459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-05-13 20:08:07.177522 | orchestrator | Tuesday 13 May 2025 19:59:49 +0000 (0:00:00.543) 0:03:01.347 *********** 2025-05-13 20:08:07.177534 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177544 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177552 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177562 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177570 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177579 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177588 | orchestrator | 2025-05-13 20:08:07.177597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-05-13 20:08:07.177607 | orchestrator | Tuesday 13 May 2025 19:59:50 +0000 (0:00:00.716) 0:03:02.063 *********** 2025-05-13 20:08:07.177616 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177624 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177633 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177641 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177650 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177658 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177666 | orchestrator | 2025-05-13 20:08:07.177675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-05-13 20:08:07.177684 | orchestrator | Tuesday 13 May 2025 19:59:51 +0000 (0:00:00.793) 0:03:02.857 *********** 2025-05-13 20:08:07.177695 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177704 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177714 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177723 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177732 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177741 | orchestrator | skipping: [testbed-node-5] 2025-05-13 
20:08:07.177750 | orchestrator | 2025-05-13 20:08:07.177759 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-05-13 20:08:07.177768 | orchestrator | Tuesday 13 May 2025 19:59:52 +0000 (0:00:00.831) 0:03:03.688 *********** 2025-05-13 20:08:07.177777 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177785 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177794 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177803 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177812 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177821 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177828 | orchestrator | 2025-05-13 20:08:07.177837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-05-13 20:08:07.177846 | orchestrator | Tuesday 13 May 2025 19:59:52 +0000 (0:00:00.661) 0:03:04.350 *********** 2025-05-13 20:08:07.177854 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.177863 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.177871 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.177879 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.177888 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.177896 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.177903 | orchestrator | 2025-05-13 20:08:07.177912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-05-13 20:08:07.177920 | orchestrator | Tuesday 13 May 2025 19:59:53 +0000 (0:00:00.652) 0:03:05.003 *********** 2025-05-13 20:08:07.177929 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.177938 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.177947 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.177963 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.177973 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.177981 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.177998 | orchestrator | 2025-05-13 20:08:07.178007 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-05-13 20:08:07.178046 | orchestrator | Tuesday 13 May 2025 19:59:54 +0000 (0:00:01.206) 0:03:06.210 *********** 2025-05-13 20:08:07.178059 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.178069 | orchestrator | 2025-05-13 20:08:07.178079 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-05-13 20:08:07.178088 | orchestrator | Tuesday 13 May 2025 19:59:55 +0000 (0:00:01.268) 0:03:07.479 *********** 2025-05-13 20:08:07.178097 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-13 20:08:07.178106 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-13 20:08:07.178115 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-13 20:08:07.178125 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-13 20:08:07.178134 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178143 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-13 20:08:07.178153 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-13 20:08:07.178163 | orchestrator | 
changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178172 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178182 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178201 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178210 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-13 20:08:07.178219 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178229 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178238 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178287 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178297 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-13 20:08:07.178306 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178370 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178394 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178404 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178414 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-13 20:08:07.178423 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178449 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178458 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178467 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-13 20:08:07.178475 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178493 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178511 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178531 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-13 20:08:07.178540 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178549 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178567 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178576 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178586 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/crash) 2025-05-13 20:08:07.178595 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178604 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178613 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178624 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178635 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178645 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-13 20:08:07.178657 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178667 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178677 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178714 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178724 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-13 20:08:07.178733 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178742 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178750 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178759 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178767 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178776 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-13 20:08:07.178784 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178792 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178802 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178810 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178820 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178829 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-13 20:08:07.178838 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178848 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178858 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178867 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178885 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.178894 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178903 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.178919 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178928 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.178984 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178996 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-13 20:08:07.179005 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179016 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-13 20:08:07.179025 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.179034 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-13 20:08:07.179043 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-13 20:08:07.179052 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179060 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-13 20:08:07.179070 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-13 20:08:07.179079 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179088 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-13 20:08:07.179097 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-13 20:08:07.179106 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-13 20:08:07.179114 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-13 20:08:07.179123 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-13 20:08:07.179131 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-13 20:08:07.179140 | orchestrator |
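With the base directory tree in place, the tail of this phase turns to the RGW instances on testbed-node-3/-4/-5: an environment-file include and per-instance directories, whose loop items below echo exactly the rgw_instances values reconstructed earlier (rgw0, port 8081). A sketch of the kind of task behind "Create rados gateway instance directories", assuming ceph-ansible's usual instance-directory naming (cluster defaults to ceph; the role's real task may differ in detail):

# Sketch (assumed details): one directory per RGW instance under /var/lib/ceph
- name: Create rados gateway instance directories
  ansible.builtin.file:
    path: "/var/lib/ceph/radosgw/{{ cluster | default('ceph') }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: directory
  loop: "{{ rgw_instances }}"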
[testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.178919 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-13 20:08:07.178928 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.178984 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.178996 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-13 20:08:07.179005 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179016 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-13 20:08:07.179025 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-13 20:08:07.179034 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-13 20:08:07.179043 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-13 20:08:07.179052 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179060 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-13 20:08:07.179070 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-13 20:08:07.179079 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-13 20:08:07.179088 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-13 20:08:07.179097 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-13 20:08:07.179106 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-13 20:08:07.179114 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-13 20:08:07.179123 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-13 20:08:07.179131 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-13 20:08:07.179140 | orchestrator | 2025-05-13 20:08:07.179150 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-13 20:08:07.179157 | orchestrator | Tuesday 13 May 2025 20:00:02 +0000 (0:00:06.711) 0:03:14.190 *********** 2025-05-13 20:08:07.179166 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179174 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179183 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179192 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.179200 | orchestrator | 2025-05-13 20:08:07.179208 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-05-13 20:08:07.179216 | orchestrator | Tuesday 13 May 2025 20:00:03 +0000 (0:00:01.129) 0:03:15.319 *********** 2025-05-13 20:08:07.179224 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179233 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179242 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179316 | orchestrator | 2025-05-13 20:08:07.179336 | orchestrator | TASK [ceph-config 
: Generate environment file] ********************************* 2025-05-13 20:08:07.179346 | orchestrator | Tuesday 13 May 2025 20:00:04 +0000 (0:00:00.722) 0:03:16.041 *********** 2025-05-13 20:08:07.179356 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179367 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179377 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.179387 | orchestrator | 2025-05-13 20:08:07.179397 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-05-13 20:08:07.179423 | orchestrator | Tuesday 13 May 2025 20:00:05 +0000 (0:00:01.419) 0:03:17.461 *********** 2025-05-13 20:08:07.179435 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179445 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179455 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179466 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.179475 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.179485 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.179495 | orchestrator | 2025-05-13 20:08:07.179505 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-05-13 20:08:07.179514 | orchestrator | Tuesday 13 May 2025 20:00:06 +0000 (0:00:00.645) 0:03:18.106 *********** 2025-05-13 20:08:07.179522 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179531 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179539 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179547 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.179555 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.179564 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.179573 | orchestrator | 2025-05-13 20:08:07.179582 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-05-13 20:08:07.179592 | orchestrator | Tuesday 13 May 2025 20:00:07 +0000 (0:00:01.188) 0:03:19.295 *********** 2025-05-13 20:08:07.179601 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179609 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179618 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179627 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.179636 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.179646 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.179655 | orchestrator | 2025-05-13 20:08:07.179665 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-05-13 20:08:07.179674 | orchestrator | Tuesday 13 May 2025 20:00:08 +0000 (0:00:00.513) 0:03:19.809 *********** 2025-05-13 20:08:07.179683 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179691 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179755 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179768 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.179777 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.179785 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.179793 | orchestrator | 
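
The two RGW tasks above loop over each host's rgw_instances list (here a single rgw0 entry per rgw node, carrying its address and frontend port). As a rough illustration of that shape, a minimal sketch of such tasks follows; module arguments, paths, and variable names are assumptions for illustration, not the verbatim contents of the role file /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml that the log shows being included:

    # Sketch only: one directory and one environment file per rgw instance.
    # "cluster" and "rgw_instances" are assumed variables matching the log output.
    - name: Create rados gateway instance directories
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ item.instance_name }}"
        state: directory
        owner: ceph
        group: ceph
        mode: "0755"
      loop: "{{ rgw_instances }}"

    - name: Generate environment file
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
        owner: ceph
        group: ceph
        mode: "0644"
      loop: "{{ rgw_instances }}"

In this run rgw_instances resolves to one entry per node, e.g. {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081} on testbed-node-3, which is why each task reports exactly one changed item per rgw host.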
2025-05-13 20:08:07.179802 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-05-13 20:08:07.179810 | orchestrator | Tuesday 13 May 2025 20:00:09 +0000 (0:00:00.800) 0:03:20.609 *********** 2025-05-13 20:08:07.179819 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179828 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179836 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179844 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.179852 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.179860 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.179868 | orchestrator | 2025-05-13 20:08:07.179876 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-13 20:08:07.179884 | orchestrator | Tuesday 13 May 2025 20:00:09 +0000 (0:00:00.564) 0:03:21.174 *********** 2025-05-13 20:08:07.179892 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179900 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179909 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.179917 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.179925 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.179933 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.179941 | orchestrator | 2025-05-13 20:08:07.179949 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-13 20:08:07.179957 | orchestrator | Tuesday 13 May 2025 20:00:10 +0000 (0:00:00.688) 0:03:21.863 *********** 2025-05-13 20:08:07.179976 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.179985 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.179993 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180002 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180011 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180019 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180028 | orchestrator | 2025-05-13 20:08:07.180036 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-13 20:08:07.180045 | orchestrator | Tuesday 13 May 2025 20:00:10 +0000 (0:00:00.660) 0:03:22.524 *********** 2025-05-13 20:08:07.180054 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180063 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180072 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180080 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180089 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180098 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180107 | orchestrator | 2025-05-13 20:08:07.180115 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-13 20:08:07.180124 | orchestrator | Tuesday 13 May 2025 20:00:11 +0000 (0:00:00.696) 0:03:23.221 *********** 2025-05-13 20:08:07.180132 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180141 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180149 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180157 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.180166 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.180175 
| orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.180183 | orchestrator | 2025-05-13 20:08:07.180197 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-05-13 20:08:07.180204 | orchestrator | Tuesday 13 May 2025 20:00:15 +0000 (0:00:03.650) 0:03:26.871 *********** 2025-05-13 20:08:07.180212 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180221 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180229 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180237 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.180244 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.180280 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.180288 | orchestrator | 2025-05-13 20:08:07.180297 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-05-13 20:08:07.180304 | orchestrator | Tuesday 13 May 2025 20:00:16 +0000 (0:00:01.016) 0:03:27.887 *********** 2025-05-13 20:08:07.180316 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180324 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180332 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180341 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.180349 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.180358 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.180366 | orchestrator | 2025-05-13 20:08:07.180374 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-05-13 20:08:07.180383 | orchestrator | Tuesday 13 May 2025 20:00:17 +0000 (0:00:00.891) 0:03:28.779 *********** 2025-05-13 20:08:07.180392 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180401 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180410 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180418 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180426 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180434 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180444 | orchestrator | 2025-05-13 20:08:07.180453 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-05-13 20:08:07.180461 | orchestrator | Tuesday 13 May 2025 20:00:18 +0000 (0:00:01.115) 0:03:29.895 *********** 2025-05-13 20:08:07.180470 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180478 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180487 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180505 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.180515 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.180522 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.180529 | orchestrator | 2025-05-13 20:08:07.180536 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-05-13 20:08:07.180593 | orchestrator | Tuesday 13 May 2025 20:00:19 +0000 (0:00:00.867) 0:03:30.763 *********** 2025-05-13 20:08:07.180603 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180611 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 20:08:07.180620 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180632 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-05-13 20:08:07.180643 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-05-13 20:08:07.180654 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180663 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-05-13 20:08:07.180672 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-05-13 20:08:07.180680 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180689 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-05-13 20:08:07.180698 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-05-13 20:08:07.180713 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180722 | orchestrator | 2025-05-13 20:08:07.180731 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-05-13 20:08:07.180739 | orchestrator | Tuesday 13 May 2025 20:00:19 +0000 (0:00:00.745) 0:03:31.508 *********** 2025-05-13 20:08:07.180747 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180755 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180764 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180772 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180779 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180787 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180794 | orchestrator | 2025-05-13 20:08:07.180802 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-05-13 20:08:07.180810 | orchestrator | Tuesday 13 May 2025 20:00:20 +0000 (0:00:00.478) 0:03:31.987 *********** 2025-05-13 20:08:07.180825 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180833 | orchestrator | skipping: [testbed-node-1] 2025-05-13 
20:08:07.180841 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180849 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180857 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180866 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180874 | orchestrator | 2025-05-13 20:08:07.180882 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-13 20:08:07.180891 | orchestrator | Tuesday 13 May 2025 20:00:21 +0000 (0:00:00.600) 0:03:32.588 *********** 2025-05-13 20:08:07.180899 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180907 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180915 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.180924 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.180932 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.180941 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.180949 | orchestrator | 2025-05-13 20:08:07.180957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-13 20:08:07.180965 | orchestrator | Tuesday 13 May 2025 20:00:21 +0000 (0:00:00.582) 0:03:33.171 *********** 2025-05-13 20:08:07.180974 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.180982 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.180991 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.181000 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.181008 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.181017 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.181025 | orchestrator | 2025-05-13 20:08:07.181034 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-13 20:08:07.181043 | orchestrator | Tuesday 13 May 2025 20:00:22 +0000 (0:00:00.623) 0:03:33.794 *********** 2025-05-13 20:08:07.181052 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181061 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.181069 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.181105 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.181114 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.181122 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.181130 | orchestrator | 2025-05-13 20:08:07.181139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-13 20:08:07.181147 | orchestrator | Tuesday 13 May 2025 20:00:22 +0000 (0:00:00.593) 0:03:34.388 *********** 2025-05-13 20:08:07.181155 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181163 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.181171 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.181180 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.181189 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.181197 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.181206 | orchestrator | 2025-05-13 20:08:07.181215 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-13 20:08:07.181223 | orchestrator | Tuesday 13 May 2025 20:00:23 +0000 (0:00:00.822) 0:03:35.210 *********** 2025-05-13 20:08:07.181231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 
20:08:07.181238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 20:08:07.181270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 20:08:07.181279 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181286 | orchestrator | 2025-05-13 20:08:07.181294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-13 20:08:07.181301 | orchestrator | Tuesday 13 May 2025 20:00:23 +0000 (0:00:00.329) 0:03:35.540 *********** 2025-05-13 20:08:07.181309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 20:08:07.181317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 20:08:07.181332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 20:08:07.181341 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181349 | orchestrator | 2025-05-13 20:08:07.181358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-13 20:08:07.181366 | orchestrator | Tuesday 13 May 2025 20:00:24 +0000 (0:00:00.348) 0:03:35.888 *********** 2025-05-13 20:08:07.181374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-13 20:08:07.181382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-13 20:08:07.181390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-13 20:08:07.181399 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181406 | orchestrator | 2025-05-13 20:08:07.181415 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-13 20:08:07.181424 | orchestrator | Tuesday 13 May 2025 20:00:24 +0000 (0:00:00.329) 0:03:36.218 *********** 2025-05-13 20:08:07.181432 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181441 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.181449 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.181458 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.181467 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.181476 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.181484 | orchestrator | 2025-05-13 20:08:07.181492 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-13 20:08:07.181505 | orchestrator | Tuesday 13 May 2025 20:00:25 +0000 (0:00:00.519) 0:03:36.737 *********** 2025-05-13 20:08:07.181514 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-13 20:08:07.181522 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.181529 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-13 20:08:07.181536 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.181544 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-13 20:08:07.181550 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.181557 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 20:08:07.181564 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-13 20:08:07.181572 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-13 20:08:07.181580 | orchestrator | 2025-05-13 20:08:07.181588 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-05-13 20:08:07.181596 | orchestrator | Tuesday 13 May 2025 20:00:27 +0000 (0:00:01.931) 0:03:38.668 *********** 2025-05-13 20:08:07.181603 | orchestrator 
| changed: [testbed-node-0] 2025-05-13 20:08:07.181611 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.181618 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.181626 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.181635 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.181643 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.181651 | orchestrator | 2025-05-13 20:08:07.181659 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.181667 | orchestrator | Tuesday 13 May 2025 20:00:30 +0000 (0:00:03.080) 0:03:41.748 *********** 2025-05-13 20:08:07.181675 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.181683 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.181691 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.181699 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.181709 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.181717 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.181725 | orchestrator | 2025-05-13 20:08:07.181734 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-13 20:08:07.181741 | orchestrator | Tuesday 13 May 2025 20:00:31 +0000 (0:00:01.363) 0:03:43.112 *********** 2025-05-13 20:08:07.181750 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.181758 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.181766 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.181775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.181790 | orchestrator | 2025-05-13 20:08:07.181798 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-13 20:08:07.181806 | orchestrator | Tuesday 13 May 2025 20:00:32 +0000 (0:00:01.079) 0:03:44.192 *********** 2025-05-13 20:08:07.181814 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.181823 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.181831 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.181839 | orchestrator | 2025-05-13 20:08:07.181849 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-13 20:08:07.181904 | orchestrator | Tuesday 13 May 2025 20:00:32 +0000 (0:00:00.333) 0:03:44.525 *********** 2025-05-13 20:08:07.181914 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.181924 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.181934 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.181943 | orchestrator | 2025-05-13 20:08:07.181953 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-13 20:08:07.181963 | orchestrator | Tuesday 13 May 2025 20:00:34 +0000 (0:00:01.543) 0:03:46.069 *********** 2025-05-13 20:08:07.181973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.181981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.181991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.182000 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.182009 | orchestrator | 2025-05-13 20:08:07.182049 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-13 
20:08:07.182059 | orchestrator | Tuesday 13 May 2025 20:00:35 +0000 (0:00:00.636) 0:03:46.705 *********** 2025-05-13 20:08:07.182069 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.182080 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.182089 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.182098 | orchestrator | 2025-05-13 20:08:07.182107 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-13 20:08:07.182114 | orchestrator | Tuesday 13 May 2025 20:00:35 +0000 (0:00:00.470) 0:03:47.175 *********** 2025-05-13 20:08:07.182123 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.182131 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.182139 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.182147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.182155 | orchestrator | 2025-05-13 20:08:07.182163 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-13 20:08:07.182171 | orchestrator | Tuesday 13 May 2025 20:00:36 +0000 (0:00:01.355) 0:03:48.531 *********** 2025-05-13 20:08:07.182180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.182188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.182195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.182204 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182213 | orchestrator | 2025-05-13 20:08:07.182222 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-13 20:08:07.182231 | orchestrator | Tuesday 13 May 2025 20:00:37 +0000 (0:00:00.379) 0:03:48.910 *********** 2025-05-13 20:08:07.182239 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182268 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.182277 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.182285 | orchestrator | 2025-05-13 20:08:07.182293 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-13 20:08:07.182301 | orchestrator | Tuesday 13 May 2025 20:00:37 +0000 (0:00:00.329) 0:03:49.240 *********** 2025-05-13 20:08:07.182309 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182317 | orchestrator | 2025-05-13 20:08:07.182332 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-13 20:08:07.182351 | orchestrator | Tuesday 13 May 2025 20:00:37 +0000 (0:00:00.234) 0:03:49.475 *********** 2025-05-13 20:08:07.182358 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182366 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.182375 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.182383 | orchestrator | 2025-05-13 20:08:07.182392 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-13 20:08:07.182400 | orchestrator | Tuesday 13 May 2025 20:00:38 +0000 (0:00:00.307) 0:03:49.782 *********** 2025-05-13 20:08:07.182409 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182417 | orchestrator | 2025-05-13 20:08:07.182426 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-13 20:08:07.182434 | orchestrator | Tuesday 13 May 2025 
20:00:38 +0000 (0:00:00.245) 0:03:50.028 *********** 2025-05-13 20:08:07.182443 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182451 | orchestrator | 2025-05-13 20:08:07.182459 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-13 20:08:07.182467 | orchestrator | Tuesday 13 May 2025 20:00:38 +0000 (0:00:00.204) 0:03:50.232 *********** 2025-05-13 20:08:07.182475 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182483 | orchestrator | 2025-05-13 20:08:07.182491 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-13 20:08:07.182499 | orchestrator | Tuesday 13 May 2025 20:00:39 +0000 (0:00:00.379) 0:03:50.612 *********** 2025-05-13 20:08:07.182506 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182513 | orchestrator | 2025-05-13 20:08:07.182521 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-13 20:08:07.182528 | orchestrator | Tuesday 13 May 2025 20:00:39 +0000 (0:00:00.232) 0:03:50.844 *********** 2025-05-13 20:08:07.182535 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182543 | orchestrator | 2025-05-13 20:08:07.182552 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-13 20:08:07.182560 | orchestrator | Tuesday 13 May 2025 20:00:39 +0000 (0:00:00.255) 0:03:51.100 *********** 2025-05-13 20:08:07.182568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.182576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.182583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.182591 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182599 | orchestrator | 2025-05-13 20:08:07.182606 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-05-13 20:08:07.182613 | orchestrator | Tuesday 13 May 2025 20:00:39 +0000 (0:00:00.412) 0:03:51.512 *********** 2025-05-13 20:08:07.182620 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182628 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.182636 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.182643 | orchestrator | 2025-05-13 20:08:07.182700 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-13 20:08:07.182710 | orchestrator | Tuesday 13 May 2025 20:00:40 +0000 (0:00:00.312) 0:03:51.824 *********** 2025-05-13 20:08:07.182718 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182727 | orchestrator | 2025-05-13 20:08:07.182735 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-13 20:08:07.182743 | orchestrator | Tuesday 13 May 2025 20:00:40 +0000 (0:00:00.228) 0:03:52.053 *********** 2025-05-13 20:08:07.182751 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182760 | orchestrator | 2025-05-13 20:08:07.182768 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-13 20:08:07.182775 | orchestrator | Tuesday 13 May 2025 20:00:40 +0000 (0:00:00.199) 0:03:52.253 *********** 2025-05-13 20:08:07.182783 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.182790 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.182798 | orchestrator | skipping: [testbed-node-2] 
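
The OSD handler block above is skipped end to end (trigger facts, balancer and pg-autoscale toggles, the restart itself) because this is an initial deploy and no OSD config change flagged a restart. A condensed sketch of that guard pattern, with assumed variable names rather than the verbatim handler_osds.yml logic:

    # Sketch only: restart OSDs only on hosts whose handler status was set.
    - name: Set_fact trigger_restart
      ansible.builtin.set_fact:
        trigger_restart: true
      loop: "{{ groups[osd_group_name] }}"
      when: hostvars[item]['handler_osd_status'] | default(false)

    - name: Restart ceph osds daemon(s)
      ansible.builtin.command: "{{ tmpdirpath.path }}/restart_osd_daemon.sh"
      when: trigger_restart | default(false)

The same called/copy-script/conditionally-restart sequence repeats for the mon, mds, rgw, and mgr handlers in the surrounding output.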
2025-05-13 20:08:07.182805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.182823 | orchestrator | 2025-05-13 20:08:07.182831 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-13 20:08:07.182840 | orchestrator | Tuesday 13 May 2025 20:00:41 +0000 (0:00:01.095) 0:03:53.348 *********** 2025-05-13 20:08:07.182848 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.182856 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.182864 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.182873 | orchestrator | 2025-05-13 20:08:07.182881 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-13 20:08:07.182889 | orchestrator | Tuesday 13 May 2025 20:00:42 +0000 (0:00:00.338) 0:03:53.686 *********** 2025-05-13 20:08:07.182897 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.182905 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.182912 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.182921 | orchestrator | 2025-05-13 20:08:07.182928 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-13 20:08:07.182936 | orchestrator | Tuesday 13 May 2025 20:00:43 +0000 (0:00:01.201) 0:03:54.888 *********** 2025-05-13 20:08:07.182944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.182951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.182959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.182966 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.182974 | orchestrator | 2025-05-13 20:08:07.182981 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-13 20:08:07.182988 | orchestrator | Tuesday 13 May 2025 20:00:44 +0000 (0:00:01.162) 0:03:56.051 *********** 2025-05-13 20:08:07.182996 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.183003 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.183011 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.183018 | orchestrator | 2025-05-13 20:08:07.183026 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-13 20:08:07.183033 | orchestrator | Tuesday 13 May 2025 20:00:44 +0000 (0:00:00.376) 0:03:56.428 *********** 2025-05-13 20:08:07.183041 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183056 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.183065 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.183073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.183081 | orchestrator | 2025-05-13 20:08:07.183089 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-13 20:08:07.183097 | orchestrator | Tuesday 13 May 2025 20:00:45 +0000 (0:00:01.034) 0:03:57.462 *********** 2025-05-13 20:08:07.183106 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.183113 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.183121 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.183129 | orchestrator | 2025-05-13 20:08:07.183136 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-05-13 20:08:07.183144 | orchestrator | Tuesday 13 May 2025 20:00:46 +0000 (0:00:00.377) 0:03:57.840 *********** 2025-05-13 20:08:07.183153 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.183162 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.183170 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.183177 | orchestrator | 2025-05-13 20:08:07.183185 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-13 20:08:07.183192 | orchestrator | Tuesday 13 May 2025 20:00:47 +0000 (0:00:01.146) 0:03:58.986 *********** 2025-05-13 20:08:07.183200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.183208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.183216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.183223 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.183239 | orchestrator | 2025-05-13 20:08:07.183304 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-13 20:08:07.183316 | orchestrator | Tuesday 13 May 2025 20:00:48 +0000 (0:00:00.856) 0:03:59.842 *********** 2025-05-13 20:08:07.183324 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.183331 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.183339 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.183348 | orchestrator | 2025-05-13 20:08:07.183355 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-05-13 20:08:07.183364 | orchestrator | Tuesday 13 May 2025 20:00:48 +0000 (0:00:00.325) 0:04:00.168 *********** 2025-05-13 20:08:07.183371 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183378 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.183386 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.183394 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.183402 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.183410 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.183418 | orchestrator | 2025-05-13 20:08:07.183426 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-13 20:08:07.183434 | orchestrator | Tuesday 13 May 2025 20:00:49 +0000 (0:00:00.871) 0:04:01.040 *********** 2025-05-13 20:08:07.183491 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.183501 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.183509 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.183516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.183525 | orchestrator | 2025-05-13 20:08:07.183532 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-13 20:08:07.183540 | orchestrator | Tuesday 13 May 2025 20:00:50 +0000 (0:00:01.038) 0:04:02.079 *********** 2025-05-13 20:08:07.183548 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.183556 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.183564 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.183571 | orchestrator | 2025-05-13 20:08:07.183579 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-13 20:08:07.183587 | orchestrator | Tuesday 13 
May 2025 20:00:50 +0000 (0:00:00.365) 0:04:02.444 *********** 2025-05-13 20:08:07.183594 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.183602 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.183609 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.183617 | orchestrator | 2025-05-13 20:08:07.183625 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-13 20:08:07.183632 | orchestrator | Tuesday 13 May 2025 20:00:52 +0000 (0:00:01.263) 0:04:03.707 *********** 2025-05-13 20:08:07.183639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.183647 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.183654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.183662 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183670 | orchestrator | 2025-05-13 20:08:07.183678 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-13 20:08:07.183685 | orchestrator | Tuesday 13 May 2025 20:00:52 +0000 (0:00:00.832) 0:04:04.539 *********** 2025-05-13 20:08:07.183693 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.183701 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.183709 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.183717 | orchestrator | 2025-05-13 20:08:07.183725 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-13 20:08:07.183733 | orchestrator | 2025-05-13 20:08:07.183739 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.183745 | orchestrator | Tuesday 13 May 2025 20:00:53 +0000 (0:00:00.842) 0:04:05.382 *********** 2025-05-13 20:08:07.183754 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.183769 | orchestrator | 2025-05-13 20:08:07.183776 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.183784 | orchestrator | Tuesday 13 May 2025 20:00:54 +0000 (0:00:00.513) 0:04:05.895 *********** 2025-05-13 20:08:07.183791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.183798 | orchestrator | 2025-05-13 20:08:07.183818 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.183825 | orchestrator | Tuesday 13 May 2025 20:00:55 +0000 (0:00:00.718) 0:04:06.614 *********** 2025-05-13 20:08:07.183832 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.183839 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.183847 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.183855 | orchestrator | 2025-05-13 20:08:07.183863 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.183871 | orchestrator | Tuesday 13 May 2025 20:00:55 +0000 (0:00:00.784) 0:04:07.398 *********** 2025-05-13 20:08:07.183879 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183886 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.183894 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.183902 | orchestrator | 2025-05-13 20:08:07.183910 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.183918 | orchestrator | Tuesday 13 May 2025 20:00:56 +0000 (0:00:00.322) 0:04:07.721 *********** 2025-05-13 20:08:07.183924 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183932 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.183939 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.183946 | orchestrator | 2025-05-13 20:08:07.183954 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.183961 | orchestrator | Tuesday 13 May 2025 20:00:56 +0000 (0:00:00.286) 0:04:08.007 *********** 2025-05-13 20:08:07.183969 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.183976 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.183983 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.183990 | orchestrator | 2025-05-13 20:08:07.183996 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.184002 | orchestrator | Tuesday 13 May 2025 20:00:56 +0000 (0:00:00.556) 0:04:08.564 *********** 2025-05-13 20:08:07.184009 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184015 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184022 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184029 | orchestrator | 2025-05-13 20:08:07.184037 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.184045 | orchestrator | Tuesday 13 May 2025 20:00:57 +0000 (0:00:00.707) 0:04:09.272 *********** 2025-05-13 20:08:07.184053 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184061 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184069 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184077 | orchestrator | 2025-05-13 20:08:07.184085 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.184093 | orchestrator | Tuesday 13 May 2025 20:00:58 +0000 (0:00:00.305) 0:04:09.577 *********** 2025-05-13 20:08:07.184101 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184109 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184117 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184125 | orchestrator | 2025-05-13 20:08:07.184133 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.184180 | orchestrator | Tuesday 13 May 2025 20:00:58 +0000 (0:00:00.304) 0:04:09.881 *********** 2025-05-13 20:08:07.184189 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184197 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184204 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184211 | orchestrator | 2025-05-13 20:08:07.184217 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.184235 | orchestrator | Tuesday 13 May 2025 20:00:59 +0000 (0:00:01.004) 0:04:10.885 *********** 2025-05-13 20:08:07.184242 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184271 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184279 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184285 | orchestrator | 2025-05-13 20:08:07.184292 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.184299 | 
orchestrator | Tuesday 13 May 2025 20:01:00 +0000 (0:00:00.695) 0:04:11.580 *********** 2025-05-13 20:08:07.184306 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184313 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184321 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184328 | orchestrator | 2025-05-13 20:08:07.184336 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.184343 | orchestrator | Tuesday 13 May 2025 20:01:00 +0000 (0:00:00.320) 0:04:11.901 *********** 2025-05-13 20:08:07.184350 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184357 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184365 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184372 | orchestrator | 2025-05-13 20:08:07.184380 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.184387 | orchestrator | Tuesday 13 May 2025 20:01:00 +0000 (0:00:00.320) 0:04:12.222 *********** 2025-05-13 20:08:07.184395 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184402 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184410 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184417 | orchestrator | 2025-05-13 20:08:07.184424 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.184433 | orchestrator | Tuesday 13 May 2025 20:01:01 +0000 (0:00:00.526) 0:04:12.749 *********** 2025-05-13 20:08:07.184440 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184448 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184456 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184463 | orchestrator | 2025-05-13 20:08:07.184471 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.184479 | orchestrator | Tuesday 13 May 2025 20:01:01 +0000 (0:00:00.299) 0:04:13.048 *********** 2025-05-13 20:08:07.184487 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184495 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184503 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184511 | orchestrator | 2025-05-13 20:08:07.184518 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.184525 | orchestrator | Tuesday 13 May 2025 20:01:01 +0000 (0:00:00.353) 0:04:13.402 *********** 2025-05-13 20:08:07.184534 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184542 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184549 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184557 | orchestrator | 2025-05-13 20:08:07.184565 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.184580 | orchestrator | Tuesday 13 May 2025 20:01:02 +0000 (0:00:00.307) 0:04:13.709 *********** 2025-05-13 20:08:07.184588 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184596 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.184604 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.184611 | orchestrator | 2025-05-13 20:08:07.184620 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.184628 | orchestrator | Tuesday 13 May 2025 20:01:02 +0000 (0:00:00.493) 0:04:14.203 
*********** 2025-05-13 20:08:07.184637 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184645 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184653 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184662 | orchestrator | 2025-05-13 20:08:07.184670 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.184684 | orchestrator | Tuesday 13 May 2025 20:01:02 +0000 (0:00:00.320) 0:04:14.523 *********** 2025-05-13 20:08:07.184692 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184700 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184708 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184717 | orchestrator | 2025-05-13 20:08:07.184725 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.184733 | orchestrator | Tuesday 13 May 2025 20:01:03 +0000 (0:00:00.415) 0:04:14.939 *********** 2025-05-13 20:08:07.184742 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184750 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184758 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184767 | orchestrator | 2025-05-13 20:08:07.184774 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-05-13 20:08:07.184781 | orchestrator | Tuesday 13 May 2025 20:01:04 +0000 (0:00:00.794) 0:04:15.733 *********** 2025-05-13 20:08:07.184789 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184796 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184804 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184812 | orchestrator | 2025-05-13 20:08:07.184821 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-05-13 20:08:07.184829 | orchestrator | Tuesday 13 May 2025 20:01:04 +0000 (0:00:00.451) 0:04:16.184 *********** 2025-05-13 20:08:07.184838 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.184846 | orchestrator | 2025-05-13 20:08:07.184854 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-05-13 20:08:07.184863 | orchestrator | Tuesday 13 May 2025 20:01:05 +0000 (0:00:00.600) 0:04:16.785 *********** 2025-05-13 20:08:07.184871 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.184879 | orchestrator | 2025-05-13 20:08:07.184888 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-05-13 20:08:07.184894 | orchestrator | Tuesday 13 May 2025 20:01:05 +0000 (0:00:00.150) 0:04:16.935 *********** 2025-05-13 20:08:07.184901 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-13 20:08:07.184908 | orchestrator | 2025-05-13 20:08:07.184957 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-05-13 20:08:07.184966 | orchestrator | Tuesday 13 May 2025 20:01:06 +0000 (0:00:01.357) 0:04:18.293 *********** 2025-05-13 20:08:07.184974 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.184983 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.184991 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.184999 | orchestrator | 2025-05-13 20:08:07.185007 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-05-13 20:08:07.185014 | orchestrator | Tuesday 13 May 2025 20:01:07 +0000 
(0:00:00.451) 0:04:18.745 *********** 2025-05-13 20:08:07.185023 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.185031 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.185040 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.185048 | orchestrator | 2025-05-13 20:08:07.185056 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-05-13 20:08:07.185065 | orchestrator | Tuesday 13 May 2025 20:01:07 +0000 (0:00:00.371) 0:04:19.117 *********** 2025-05-13 20:08:07.185073 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185082 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185090 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185098 | orchestrator | 2025-05-13 20:08:07.185106 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-05-13 20:08:07.185113 | orchestrator | Tuesday 13 May 2025 20:01:08 +0000 (0:00:01.244) 0:04:20.361 *********** 2025-05-13 20:08:07.185121 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185129 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185136 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185144 | orchestrator | 2025-05-13 20:08:07.185151 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-05-13 20:08:07.185166 | orchestrator | Tuesday 13 May 2025 20:01:09 +0000 (0:00:00.902) 0:04:21.263 *********** 2025-05-13 20:08:07.185174 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185181 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185189 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185196 | orchestrator | 2025-05-13 20:08:07.185204 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-05-13 20:08:07.185211 | orchestrator | Tuesday 13 May 2025 20:01:10 +0000 (0:00:00.613) 0:04:21.877 *********** 2025-05-13 20:08:07.185219 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.185227 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.185235 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.185243 | orchestrator | 2025-05-13 20:08:07.185271 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-05-13 20:08:07.185279 | orchestrator | Tuesday 13 May 2025 20:01:10 +0000 (0:00:00.627) 0:04:22.505 *********** 2025-05-13 20:08:07.185287 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185295 | orchestrator | 2025-05-13 20:08:07.185304 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-05-13 20:08:07.185312 | orchestrator | Tuesday 13 May 2025 20:01:12 +0000 (0:00:01.237) 0:04:23.743 *********** 2025-05-13 20:08:07.185320 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.185328 | orchestrator | 2025-05-13 20:08:07.185335 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-05-13 20:08:07.185343 | orchestrator | Tuesday 13 May 2025 20:01:12 +0000 (0:00:00.678) 0:04:24.421 *********** 2025-05-13 20:08:07.185357 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:08:07.185365 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.185373 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.185381 | orchestrator | 
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:08:07.185389 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-05-13 20:08:07.185397 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:08:07.185405 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:08:07.185413 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-05-13 20:08:07.185422 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:08:07.185429 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-05-13 20:08:07.185437 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-05-13 20:08:07.185445 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-05-13 20:08:07.185452 | orchestrator | 2025-05-13 20:08:07.185459 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-05-13 20:08:07.185467 | orchestrator | Tuesday 13 May 2025 20:01:16 +0000 (0:00:03.474) 0:04:27.895 *********** 2025-05-13 20:08:07.185474 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185482 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185490 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185497 | orchestrator | 2025-05-13 20:08:07.185504 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-05-13 20:08:07.185511 | orchestrator | Tuesday 13 May 2025 20:01:17 +0000 (0:00:01.555) 0:04:29.451 *********** 2025-05-13 20:08:07.185518 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.185526 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.185533 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.185540 | orchestrator | 2025-05-13 20:08:07.185546 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-05-13 20:08:07.185554 | orchestrator | Tuesday 13 May 2025 20:01:18 +0000 (0:00:00.378) 0:04:29.829 *********** 2025-05-13 20:08:07.185561 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.185568 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.185576 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.185584 | orchestrator | 2025-05-13 20:08:07.185598 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-05-13 20:08:07.185606 | orchestrator | Tuesday 13 May 2025 20:01:18 +0000 (0:00:00.330) 0:04:30.160 *********** 2025-05-13 20:08:07.185614 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185622 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185630 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185639 | orchestrator | 2025-05-13 20:08:07.185647 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-05-13 20:08:07.185697 | orchestrator | Tuesday 13 May 2025 20:01:20 +0000 (0:00:02.109) 0:04:32.270 *********** 2025-05-13 20:08:07.185709 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185717 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185726 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185734 | orchestrator | 2025-05-13 20:08:07.185743 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-05-13 20:08:07.185752 | orchestrator | Tuesday 13 May 2025 20:01:22 +0000 (0:00:01.679) 0:04:33.949 
*********** 2025-05-13 20:08:07.185760 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.185768 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.185775 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.185783 | orchestrator | 2025-05-13 20:08:07.185790 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-05-13 20:08:07.185797 | orchestrator | Tuesday 13 May 2025 20:01:22 +0000 (0:00:00.259) 0:04:34.209 *********** 2025-05-13 20:08:07.185804 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.185811 | orchestrator | 2025-05-13 20:08:07.185818 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-05-13 20:08:07.185825 | orchestrator | Tuesday 13 May 2025 20:01:23 +0000 (0:00:00.409) 0:04:34.618 *********** 2025-05-13 20:08:07.185833 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.185841 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.185849 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.185858 | orchestrator | 2025-05-13 20:08:07.185864 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-05-13 20:08:07.185871 | orchestrator | Tuesday 13 May 2025 20:01:23 +0000 (0:00:00.360) 0:04:34.979 *********** 2025-05-13 20:08:07.185879 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.185887 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.185895 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.185903 | orchestrator | 2025-05-13 20:08:07.185912 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-05-13 20:08:07.185920 | orchestrator | Tuesday 13 May 2025 20:01:23 +0000 (0:00:00.254) 0:04:35.233 *********** 2025-05-13 20:08:07.185928 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.185937 | orchestrator | 2025-05-13 20:08:07.185946 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-05-13 20:08:07.185954 | orchestrator | Tuesday 13 May 2025 20:01:24 +0000 (0:00:00.416) 0:04:35.649 *********** 2025-05-13 20:08:07.185962 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.185970 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.185979 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.185987 | orchestrator | 2025-05-13 20:08:07.185995 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-05-13 20:08:07.186004 | orchestrator | Tuesday 13 May 2025 20:01:25 +0000 (0:00:01.897) 0:04:37.547 *********** 2025-05-13 20:08:07.186038 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.186050 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.186059 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.186068 | orchestrator | 2025-05-13 20:08:07.186083 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-05-13 20:08:07.186092 | orchestrator | Tuesday 13 May 2025 20:01:27 +0000 (0:00:01.232) 0:04:38.779 *********** 2025-05-13 20:08:07.186109 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.186117 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.186124 | orchestrator 
| changed: [testbed-node-2] 2025-05-13 20:08:07.186131 | orchestrator | 2025-05-13 20:08:07.186139 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-05-13 20:08:07.186146 | orchestrator | Tuesday 13 May 2025 20:01:28 +0000 (0:00:01.686) 0:04:40.466 *********** 2025-05-13 20:08:07.186154 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.186161 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.186168 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.186175 | orchestrator | 2025-05-13 20:08:07.186183 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-05-13 20:08:07.186191 | orchestrator | Tuesday 13 May 2025 20:01:31 +0000 (0:00:02.403) 0:04:42.869 *********** 2025-05-13 20:08:07.186198 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.186206 | orchestrator | 2025-05-13 20:08:07.186213 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-05-13 20:08:07.186220 | orchestrator | Tuesday 13 May 2025 20:01:32 +0000 (0:00:00.810) 0:04:43.680 *********** 2025-05-13 20:08:07.186228 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-13 20:08:07.186235 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.186242 | orchestrator | 2025-05-13 20:08:07.186305 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-05-13 20:08:07.186316 | orchestrator | Tuesday 13 May 2025 20:01:53 +0000 (0:00:21.779) 0:05:05.460 *********** 2025-05-13 20:08:07.186324 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.186332 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.186338 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.186345 | orchestrator | 2025-05-13 20:08:07.186353 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-05-13 20:08:07.186361 | orchestrator | Tuesday 13 May 2025 20:02:04 +0000 (0:00:10.141) 0:05:15.601 *********** 2025-05-13 20:08:07.186368 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.186376 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.186384 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.186392 | orchestrator | 2025-05-13 20:08:07.186400 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-05-13 20:08:07.186407 | orchestrator | Tuesday 13 May 2025 20:02:04 +0000 (0:00:00.520) 0:05:16.122 *********** 2025-05-13 20:08:07.186463 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-05-13 20:08:07.186477 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2025-05-13 20:08:07.186488 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-05-13 20:08:07.186497 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-05-13 20:08:07.186514 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-05-13 20:08:07.186529 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1b3533003d6a78cf5e3ae274d6b41af05d834d05'}])  2025-05-13 20:08:07.186540 | orchestrator | 2025-05-13 20:08:07.186548 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.186555 | orchestrator | Tuesday 13 May 2025 20:02:18 +0000 (0:00:13.991) 0:05:30.113 *********** 2025-05-13 20:08:07.186563 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.186569 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.186576 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.186583 | orchestrator | 2025-05-13 20:08:07.186590 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-13 20:08:07.186597 | orchestrator | Tuesday 13 May 2025 20:02:18 +0000 (0:00:00.330) 0:05:30.444 *********** 2025-05-13 20:08:07.186604 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.186611 | orchestrator | 2025-05-13 20:08:07.186618 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-13 20:08:07.186626 | orchestrator | Tuesday 13 May 2025 20:02:19 +0000 (0:00:00.783) 0:05:31.227 *********** 2025-05-13 20:08:07.186635 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.186643 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.186652 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.186660 | orchestrator | 2025-05-13 20:08:07.186668 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-13 20:08:07.186676 | orchestrator | Tuesday 13 May 2025 20:02:20 +0000 (0:00:00.405) 0:05:31.633 *********** 2025-05-13 20:08:07.186683 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.186690 | orchestrator | 
skipping: [testbed-node-1] 2025-05-13 20:08:07.186697 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.186705 | orchestrator | 2025-05-13 20:08:07.186712 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-13 20:08:07.186720 | orchestrator | Tuesday 13 May 2025 20:02:20 +0000 (0:00:00.406) 0:05:32.039 *********** 2025-05-13 20:08:07.186728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.186736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.186743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.186750 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.186757 | orchestrator | 2025-05-13 20:08:07.186765 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-13 20:08:07.186772 | orchestrator | Tuesday 13 May 2025 20:02:21 +0000 (0:00:00.993) 0:05:33.033 *********** 2025-05-13 20:08:07.186779 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.186786 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.186793 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.186800 | orchestrator | 2025-05-13 20:08:07.186807 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-13 20:08:07.186814 | orchestrator | 2025-05-13 20:08:07.186822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.186877 | orchestrator | Tuesday 13 May 2025 20:02:22 +0000 (0:00:00.914) 0:05:33.947 *********** 2025-05-13 20:08:07.186886 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.186894 | orchestrator | 2025-05-13 20:08:07.186901 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.186908 | orchestrator | Tuesday 13 May 2025 20:02:22 +0000 (0:00:00.595) 0:05:34.543 *********** 2025-05-13 20:08:07.186916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.186924 | orchestrator | 2025-05-13 20:08:07.186931 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.186938 | orchestrator | Tuesday 13 May 2025 20:02:23 +0000 (0:00:00.896) 0:05:35.439 *********** 2025-05-13 20:08:07.186946 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.186953 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.186961 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.186969 | orchestrator | 2025-05-13 20:08:07.186976 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.186984 | orchestrator | Tuesday 13 May 2025 20:02:24 +0000 (0:00:00.738) 0:05:36.178 *********** 2025-05-13 20:08:07.186991 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.186999 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187006 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187013 | orchestrator | 2025-05-13 20:08:07.187021 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.187028 | orchestrator | Tuesday 13 May 2025 20:02:24 +0000 (0:00:00.279) 0:05:36.458 
*********** 2025-05-13 20:08:07.187036 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187043 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187050 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187058 | orchestrator | 2025-05-13 20:08:07.187065 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.187072 | orchestrator | Tuesday 13 May 2025 20:02:25 +0000 (0:00:00.548) 0:05:37.006 *********** 2025-05-13 20:08:07.187080 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187086 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187094 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187101 | orchestrator | 2025-05-13 20:08:07.187108 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.187116 | orchestrator | Tuesday 13 May 2025 20:02:25 +0000 (0:00:00.379) 0:05:37.385 *********** 2025-05-13 20:08:07.187123 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187131 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187155 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187163 | orchestrator | 2025-05-13 20:08:07.187171 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.187178 | orchestrator | Tuesday 13 May 2025 20:02:26 +0000 (0:00:00.689) 0:05:38.075 *********** 2025-05-13 20:08:07.187186 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187193 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187200 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187206 | orchestrator | 2025-05-13 20:08:07.187218 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.187224 | orchestrator | Tuesday 13 May 2025 20:02:26 +0000 (0:00:00.293) 0:05:38.368 *********** 2025-05-13 20:08:07.187231 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187238 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187244 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187268 | orchestrator | 2025-05-13 20:08:07.187275 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.187281 | orchestrator | Tuesday 13 May 2025 20:02:27 +0000 (0:00:00.529) 0:05:38.898 *********** 2025-05-13 20:08:07.187295 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187302 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187309 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187316 | orchestrator | 2025-05-13 20:08:07.187323 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.187330 | orchestrator | Tuesday 13 May 2025 20:02:28 +0000 (0:00:00.729) 0:05:39.628 *********** 2025-05-13 20:08:07.187337 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187344 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187352 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187359 | orchestrator | 2025-05-13 20:08:07.187366 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.187374 | orchestrator | Tuesday 13 May 2025 20:02:28 +0000 (0:00:00.740) 0:05:40.368 *********** 2025-05-13 20:08:07.187381 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187389 | 
orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187396 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187404 | orchestrator | 2025-05-13 20:08:07.187411 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.187419 | orchestrator | Tuesday 13 May 2025 20:02:29 +0000 (0:00:00.327) 0:05:40.696 *********** 2025-05-13 20:08:07.187426 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187434 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187440 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187448 | orchestrator | 2025-05-13 20:08:07.187455 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.187463 | orchestrator | Tuesday 13 May 2025 20:02:29 +0000 (0:00:00.587) 0:05:41.283 *********** 2025-05-13 20:08:07.187470 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187477 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187485 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187493 | orchestrator | 2025-05-13 20:08:07.187500 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.187507 | orchestrator | Tuesday 13 May 2025 20:02:30 +0000 (0:00:00.314) 0:05:41.598 *********** 2025-05-13 20:08:07.187515 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187522 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187530 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187537 | orchestrator | 2025-05-13 20:08:07.187544 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.187586 | orchestrator | Tuesday 13 May 2025 20:02:30 +0000 (0:00:00.301) 0:05:41.900 *********** 2025-05-13 20:08:07.187595 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187602 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187609 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187616 | orchestrator | 2025-05-13 20:08:07.187623 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.187630 | orchestrator | Tuesday 13 May 2025 20:02:30 +0000 (0:00:00.315) 0:05:42.215 *********** 2025-05-13 20:08:07.187638 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187645 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187653 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187660 | orchestrator | 2025-05-13 20:08:07.187668 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.187675 | orchestrator | Tuesday 13 May 2025 20:02:31 +0000 (0:00:00.532) 0:05:42.747 *********** 2025-05-13 20:08:07.187683 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187690 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187697 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.187704 | orchestrator | 2025-05-13 20:08:07.187711 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.187719 | orchestrator | Tuesday 13 May 2025 20:02:31 +0000 (0:00:00.332) 0:05:43.080 *********** 2025-05-13 20:08:07.187726 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187741 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187748 | orchestrator | ok: 
[testbed-node-2] 2025-05-13 20:08:07.187756 | orchestrator | 2025-05-13 20:08:07.187763 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.187770 | orchestrator | Tuesday 13 May 2025 20:02:31 +0000 (0:00:00.327) 0:05:43.407 *********** 2025-05-13 20:08:07.187777 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187783 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187790 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187796 | orchestrator | 2025-05-13 20:08:07.187802 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.187809 | orchestrator | Tuesday 13 May 2025 20:02:32 +0000 (0:00:00.354) 0:05:43.762 *********** 2025-05-13 20:08:07.187817 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.187824 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.187831 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.187839 | orchestrator | 2025-05-13 20:08:07.187846 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-05-13 20:08:07.187854 | orchestrator | Tuesday 13 May 2025 20:02:32 +0000 (0:00:00.801) 0:05:44.563 *********** 2025-05-13 20:08:07.187861 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:08:07.187869 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.187876 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.187884 | orchestrator | 2025-05-13 20:08:07.187891 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-05-13 20:08:07.187898 | orchestrator | Tuesday 13 May 2025 20:02:33 +0000 (0:00:00.688) 0:05:45.252 *********** 2025-05-13 20:08:07.187915 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.187923 | orchestrator | 2025-05-13 20:08:07.187930 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-05-13 20:08:07.187938 | orchestrator | Tuesday 13 May 2025 20:02:34 +0000 (0:00:00.499) 0:05:45.751 *********** 2025-05-13 20:08:07.187945 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.187951 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.187958 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.187964 | orchestrator | 2025-05-13 20:08:07.187971 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-05-13 20:08:07.187978 | orchestrator | Tuesday 13 May 2025 20:02:35 +0000 (0:00:00.920) 0:05:46.672 *********** 2025-05-13 20:08:07.187984 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.187991 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.187997 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.188003 | orchestrator | 2025-05-13 20:08:07.188010 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-05-13 20:08:07.188016 | orchestrator | Tuesday 13 May 2025 20:02:35 +0000 (0:00:00.325) 0:05:46.998 *********** 2025-05-13 20:08:07.188023 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:08:07.188030 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:08:07.188036 | orchestrator | changed: [testbed-node-0] 
=> (item=None) 2025-05-13 20:08:07.188043 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-13 20:08:07.188050 | orchestrator | 2025-05-13 20:08:07.188057 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-05-13 20:08:07.188063 | orchestrator | Tuesday 13 May 2025 20:02:46 +0000 (0:00:10.901) 0:05:57.900 *********** 2025-05-13 20:08:07.188070 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.188077 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.188083 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.188091 | orchestrator | 2025-05-13 20:08:07.188097 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-05-13 20:08:07.188104 | orchestrator | Tuesday 13 May 2025 20:02:46 +0000 (0:00:00.332) 0:05:58.233 *********** 2025-05-13 20:08:07.188116 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-13 20:08:07.188123 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-13 20:08:07.188130 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-13 20:08:07.188137 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-13 20:08:07.188144 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.188151 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.188158 | orchestrator | 2025-05-13 20:08:07.188165 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-05-13 20:08:07.188172 | orchestrator | Tuesday 13 May 2025 20:02:49 +0000 (0:00:02.614) 0:06:00.847 *********** 2025-05-13 20:08:07.188214 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-13 20:08:07.188222 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-13 20:08:07.188229 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-13 20:08:07.188236 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-13 20:08:07.188243 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:08:07.188301 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-13 20:08:07.188308 | orchestrator | 2025-05-13 20:08:07.188315 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-05-13 20:08:07.188321 | orchestrator | Tuesday 13 May 2025 20:02:50 +0000 (0:00:01.158) 0:06:02.005 *********** 2025-05-13 20:08:07.188328 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.188334 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.188341 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.188348 | orchestrator | 2025-05-13 20:08:07.188355 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-05-13 20:08:07.188362 | orchestrator | Tuesday 13 May 2025 20:02:51 +0000 (0:00:00.694) 0:06:02.700 *********** 2025-05-13 20:08:07.188369 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.188376 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.188383 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.188391 | orchestrator | 2025-05-13 20:08:07.188398 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-05-13 20:08:07.188405 | orchestrator | Tuesday 13 May 2025 20:02:51 +0000 (0:00:00.348) 0:06:03.049 *********** 2025-05-13 20:08:07.188412 | 
orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.188419 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.188426 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.188432 | orchestrator | 2025-05-13 20:08:07.188439 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-05-13 20:08:07.188445 | orchestrator | Tuesday 13 May 2025 20:02:52 +0000 (0:00:00.529) 0:06:03.579 *********** 2025-05-13 20:08:07.188452 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.188459 | orchestrator | 2025-05-13 20:08:07.188467 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-05-13 20:08:07.188474 | orchestrator | Tuesday 13 May 2025 20:02:52 +0000 (0:00:00.521) 0:06:04.100 *********** 2025-05-13 20:08:07.188481 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.188488 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.188496 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.188503 | orchestrator | 2025-05-13 20:08:07.188510 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-05-13 20:08:07.188517 | orchestrator | Tuesday 13 May 2025 20:02:52 +0000 (0:00:00.316) 0:06:04.416 *********** 2025-05-13 20:08:07.188523 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.188529 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.188535 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.188541 | orchestrator | 2025-05-13 20:08:07.188547 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-05-13 20:08:07.188561 | orchestrator | Tuesday 13 May 2025 20:02:53 +0000 (0:00:00.353) 0:06:04.770 *********** 2025-05-13 20:08:07.188574 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.188582 | orchestrator | 2025-05-13 20:08:07.188589 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-05-13 20:08:07.188597 | orchestrator | Tuesday 13 May 2025 20:02:54 +0000 (0:00:00.855) 0:06:05.625 *********** 2025-05-13 20:08:07.188604 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.188611 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.188619 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.188626 | orchestrator | 2025-05-13 20:08:07.188634 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-05-13 20:08:07.188641 | orchestrator | Tuesday 13 May 2025 20:02:55 +0000 (0:00:01.270) 0:06:06.896 *********** 2025-05-13 20:08:07.188649 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.188656 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.188664 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.188671 | orchestrator | 2025-05-13 20:08:07.188679 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-05-13 20:08:07.188686 | orchestrator | Tuesday 13 May 2025 20:02:56 +0000 (0:00:01.112) 0:06:08.009 *********** 2025-05-13 20:08:07.188694 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.188701 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.188708 | orchestrator | changed: [testbed-node-2] 2025-05-13 
20:08:07.188716 | orchestrator | 2025-05-13 20:08:07.188723 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-05-13 20:08:07.188731 | orchestrator | Tuesday 13 May 2025 20:02:58 +0000 (0:00:01.977) 0:06:09.987 *********** 2025-05-13 20:08:07.188738 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.188746 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.188753 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.188761 | orchestrator | 2025-05-13 20:08:07.188769 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-05-13 20:08:07.188776 | orchestrator | Tuesday 13 May 2025 20:03:00 +0000 (0:00:01.926) 0:06:11.913 *********** 2025-05-13 20:08:07.188782 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.188788 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.188794 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-13 20:08:07.188801 | orchestrator | 2025-05-13 20:08:07.188809 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-05-13 20:08:07.188817 | orchestrator | Tuesday 13 May 2025 20:03:00 +0000 (0:00:00.408) 0:06:12.322 *********** 2025-05-13 20:08:07.188824 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-05-13 20:08:07.188832 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-05-13 20:08:07.188873 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-05-13 20:08:07.188881 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
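The "Wait for all mgr to be up" retries above are ceph-ansible polling the mgr map on the first monitor until every mgr host in the inventory has registered — here, three mgr daemons (one active plus two standbys). A minimal hand-run equivalent of that poll, assuming the admin keyring is available on the node and that jq is installed (both assumptions, not shown in this log):

    # Poll the cluster's mgr map until three mgr daemons (1 active + 2 standby) appear.
    until [ "$(ceph mgr dump -f json | jq '[.active_name] + [.standbys[].name] | length')" -ge 3 ]; do
        sleep 5
    done

A few failed retries right after "Systemd start mgr" are normal: the daemons first have to connect to the monitors and deliver a beacon before they show up in the map.
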
2025-05-13 20:08:07.188888 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.188894 | orchestrator | 2025-05-13 20:08:07.188900 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-05-13 20:08:07.188907 | orchestrator | Tuesday 13 May 2025 20:03:25 +0000 (0:00:24.299) 0:06:36.621 *********** 2025-05-13 20:08:07.188914 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.188921 | orchestrator | 2025-05-13 20:08:07.188927 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-13 20:08:07.188934 | orchestrator | Tuesday 13 May 2025 20:03:26 +0000 (0:00:01.661) 0:06:38.283 *********** 2025-05-13 20:08:07.188940 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.188954 | orchestrator | 2025-05-13 20:08:07.188961 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-05-13 20:08:07.188968 | orchestrator | Tuesday 13 May 2025 20:03:27 +0000 (0:00:00.847) 0:06:39.131 *********** 2025-05-13 20:08:07.188975 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.188981 | orchestrator | 2025-05-13 20:08:07.188988 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-05-13 20:08:07.188994 | orchestrator | Tuesday 13 May 2025 20:03:27 +0000 (0:00:00.156) 0:06:39.287 *********** 2025-05-13 20:08:07.189000 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-13 20:08:07.189006 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-13 20:08:07.189013 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-13 20:08:07.189019 | orchestrator | 2025-05-13 20:08:07.189026 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-05-13 20:08:07.189032 | orchestrator | Tuesday 13 May 2025 20:03:34 +0000 (0:00:06.321) 0:06:45.608 *********** 2025-05-13 20:08:07.189039 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-13 20:08:07.189046 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-13 20:08:07.189052 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-13 20:08:07.189059 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-13 20:08:07.189065 | orchestrator | 2025-05-13 20:08:07.189071 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.189078 | orchestrator | Tuesday 13 May 2025 20:03:38 +0000 (0:00:04.607) 0:06:50.216 *********** 2025-05-13 20:08:07.189083 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.189090 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.189096 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.189102 | orchestrator | 2025-05-13 20:08:07.189108 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-13 20:08:07.189119 | orchestrator | Tuesday 13 May 2025 20:03:39 +0000 (0:00:00.862) 0:06:51.078 *********** 2025-05-13 20:08:07.189126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:08:07.189132 | orchestrator | 2025-05-13 
20:08:07.189139 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-13 20:08:07.189145 | orchestrator | Tuesday 13 May 2025 20:03:40 +0000 (0:00:00.506) 0:06:51.585 *********** 2025-05-13 20:08:07.189152 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.189159 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.189166 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.189173 | orchestrator | 2025-05-13 20:08:07.189180 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-13 20:08:07.189187 | orchestrator | Tuesday 13 May 2025 20:03:40 +0000 (0:00:00.295) 0:06:51.881 *********** 2025-05-13 20:08:07.189194 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.189200 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.189207 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.189214 | orchestrator | 2025-05-13 20:08:07.189221 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-13 20:08:07.189228 | orchestrator | Tuesday 13 May 2025 20:03:41 +0000 (0:00:01.433) 0:06:53.315 *********** 2025-05-13 20:08:07.189235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-13 20:08:07.189242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-13 20:08:07.189265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-13 20:08:07.189272 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.189279 | orchestrator | 2025-05-13 20:08:07.189286 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-13 20:08:07.189293 | orchestrator | Tuesday 13 May 2025 20:03:42 +0000 (0:00:00.644) 0:06:53.959 *********** 2025-05-13 20:08:07.189306 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.189313 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.189320 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.189327 | orchestrator | 2025-05-13 20:08:07.189334 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-13 20:08:07.189341 | orchestrator | 2025-05-13 20:08:07.189347 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.189354 | orchestrator | Tuesday 13 May 2025 20:03:42 +0000 (0:00:00.562) 0:06:54.522 *********** 2025-05-13 20:08:07.189361 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.189368 | orchestrator | 2025-05-13 20:08:07.189375 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.189382 | orchestrator | Tuesday 13 May 2025 20:03:43 +0000 (0:00:00.743) 0:06:55.265 *********** 2025-05-13 20:08:07.189420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.189429 | orchestrator | 2025-05-13 20:08:07.189435 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.189441 | orchestrator | Tuesday 13 May 2025 20:03:44 +0000 (0:00:00.527) 0:06:55.793 *********** 2025-05-13 20:08:07.189448 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189453 | orchestrator | skipping: [testbed-node-4] 
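The check_running_containers.yml tasks here (continuing below for the OSD nodes) only probe whether a given daemon's container already exists on each host; the resulting handler_*_status facts later decide which daemons the restart handlers may touch. The probe is essentially a container-runtime listing filtered by name — a sketch, assuming podman as the container binary (ceph-ansible uses whichever container binary the deployment configured):

    # A non-empty ID list means a ceph-osd container is already present on this host,
    # so the osd restart handler is allowed to act on it.
    podman ps -q --filter name=ceph-osd
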
2025-05-13 20:08:07.189459 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189465 | orchestrator | 2025-05-13 20:08:07.189471 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.189477 | orchestrator | Tuesday 13 May 2025 20:03:44 +0000 (0:00:00.310) 0:06:56.103 *********** 2025-05-13 20:08:07.189483 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189489 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189496 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189502 | orchestrator | 2025-05-13 20:08:07.189509 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.189515 | orchestrator | Tuesday 13 May 2025 20:03:45 +0000 (0:00:00.930) 0:06:57.034 *********** 2025-05-13 20:08:07.189521 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189527 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189533 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189540 | orchestrator | 2025-05-13 20:08:07.189546 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.189553 | orchestrator | Tuesday 13 May 2025 20:03:46 +0000 (0:00:00.687) 0:06:57.721 *********** 2025-05-13 20:08:07.189560 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189566 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189573 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189579 | orchestrator | 2025-05-13 20:08:07.189586 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.189593 | orchestrator | Tuesday 13 May 2025 20:03:46 +0000 (0:00:00.654) 0:06:58.375 *********** 2025-05-13 20:08:07.189599 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189606 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.189612 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189619 | orchestrator | 2025-05-13 20:08:07.189625 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.189631 | orchestrator | Tuesday 13 May 2025 20:03:47 +0000 (0:00:00.288) 0:06:58.664 *********** 2025-05-13 20:08:07.189637 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189643 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.189649 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189655 | orchestrator | 2025-05-13 20:08:07.189662 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.189668 | orchestrator | Tuesday 13 May 2025 20:03:47 +0000 (0:00:00.575) 0:06:59.239 *********** 2025-05-13 20:08:07.189681 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189687 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.189693 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189699 | orchestrator | 2025-05-13 20:08:07.189706 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.189712 | orchestrator | Tuesday 13 May 2025 20:03:47 +0000 (0:00:00.311) 0:06:59.550 *********** 2025-05-13 20:08:07.189724 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189731 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189738 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189745 | orchestrator | 2025-05-13 
20:08:07.189751 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.189758 | orchestrator | Tuesday 13 May 2025 20:03:48 +0000 (0:00:00.688) 0:07:00.239 *********** 2025-05-13 20:08:07.189765 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189771 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189777 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189782 | orchestrator | 2025-05-13 20:08:07.189788 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.189794 | orchestrator | Tuesday 13 May 2025 20:03:49 +0000 (0:00:00.653) 0:07:00.893 *********** 2025-05-13 20:08:07.189800 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189807 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.189814 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189820 | orchestrator | 2025-05-13 20:08:07.189827 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.189834 | orchestrator | Tuesday 13 May 2025 20:03:49 +0000 (0:00:00.522) 0:07:01.415 *********** 2025-05-13 20:08:07.189840 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.189847 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.189854 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.189860 | orchestrator | 2025-05-13 20:08:07.189867 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.189874 | orchestrator | Tuesday 13 May 2025 20:03:50 +0000 (0:00:00.292) 0:07:01.708 *********** 2025-05-13 20:08:07.189881 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189887 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189893 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189900 | orchestrator | 2025-05-13 20:08:07.189905 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.189912 | orchestrator | Tuesday 13 May 2025 20:03:50 +0000 (0:00:00.308) 0:07:02.016 *********** 2025-05-13 20:08:07.189918 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189924 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189931 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189937 | orchestrator | 2025-05-13 20:08:07.189944 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.189951 | orchestrator | Tuesday 13 May 2025 20:03:50 +0000 (0:00:00.306) 0:07:02.323 *********** 2025-05-13 20:08:07.189957 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.189964 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.189971 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.189977 | orchestrator | 2025-05-13 20:08:07.189984 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.189991 | orchestrator | Tuesday 13 May 2025 20:03:51 +0000 (0:00:00.622) 0:07:02.946 *********** 2025-05-13 20:08:07.189998 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190004 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190011 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190041 | orchestrator | 2025-05-13 20:08:07.190056 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 
20:08:07.190063 | orchestrator | Tuesday 13 May 2025 20:03:51 +0000 (0:00:00.294) 0:07:03.241 *********** 2025-05-13 20:08:07.190070 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190076 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190089 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190096 | orchestrator | 2025-05-13 20:08:07.190102 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.190109 | orchestrator | Tuesday 13 May 2025 20:03:51 +0000 (0:00:00.310) 0:07:03.551 *********** 2025-05-13 20:08:07.190116 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190123 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190129 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190136 | orchestrator | 2025-05-13 20:08:07.190143 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.190150 | orchestrator | Tuesday 13 May 2025 20:03:52 +0000 (0:00:00.275) 0:07:03.827 *********** 2025-05-13 20:08:07.190156 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.190163 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.190170 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.190176 | orchestrator | 2025-05-13 20:08:07.190183 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.190190 | orchestrator | Tuesday 13 May 2025 20:03:52 +0000 (0:00:00.601) 0:07:04.428 *********** 2025-05-13 20:08:07.190197 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.190203 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.190209 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.190216 | orchestrator | 2025-05-13 20:08:07.190223 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-05-13 20:08:07.190230 | orchestrator | Tuesday 13 May 2025 20:03:53 +0000 (0:00:00.555) 0:07:04.984 *********** 2025-05-13 20:08:07.190237 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.190244 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.190268 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.190275 | orchestrator | 2025-05-13 20:08:07.190280 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-05-13 20:08:07.190287 | orchestrator | Tuesday 13 May 2025 20:03:53 +0000 (0:00:00.300) 0:07:05.285 *********** 2025-05-13 20:08:07.190293 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 20:08:07.190300 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:08:07.190306 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:08:07.190312 | orchestrator | 2025-05-13 20:08:07.190319 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-05-13 20:08:07.190326 | orchestrator | Tuesday 13 May 2025 20:03:54 +0000 (0:00:00.954) 0:07:06.239 *********** 2025-05-13 20:08:07.190332 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.190339 | orchestrator | 2025-05-13 20:08:07.190346 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-05-13 20:08:07.190358 | 
orchestrator | Tuesday 13 May 2025 20:03:55 +0000 (0:00:00.867) 0:07:07.107 *********** 2025-05-13 20:08:07.190364 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190371 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190378 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190384 | orchestrator | 2025-05-13 20:08:07.190391 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-05-13 20:08:07.190398 | orchestrator | Tuesday 13 May 2025 20:03:55 +0000 (0:00:00.308) 0:07:07.416 *********** 2025-05-13 20:08:07.190405 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190411 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190418 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190425 | orchestrator | 2025-05-13 20:08:07.190432 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-05-13 20:08:07.190438 | orchestrator | Tuesday 13 May 2025 20:03:56 +0000 (0:00:00.345) 0:07:07.762 *********** 2025-05-13 20:08:07.190445 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.190451 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.190463 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.190469 | orchestrator | 2025-05-13 20:08:07.190476 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-05-13 20:08:07.190482 | orchestrator | Tuesday 13 May 2025 20:03:57 +0000 (0:00:00.829) 0:07:08.591 *********** 2025-05-13 20:08:07.190488 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.190495 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.190501 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.190506 | orchestrator | 2025-05-13 20:08:07.190513 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-05-13 20:08:07.190519 | orchestrator | Tuesday 13 May 2025 20:03:57 +0000 (0:00:00.322) 0:07:08.913 *********** 2025-05-13 20:08:07.190525 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-13 20:08:07.190532 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-13 20:08:07.190537 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-13 20:08:07.190544 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-13 20:08:07.190550 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-13 20:08:07.190556 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-13 20:08:07.190562 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-13 20:08:07.190568 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-13 20:08:07.190584 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-13 20:08:07.190591 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-13 20:08:07.190597 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-13 20:08:07.190603 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.swappiness', 'value': 10}) 2025-05-13 20:08:07.190608 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-13 20:08:07.190615 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-13 20:08:07.190621 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-13 20:08:07.190627 | orchestrator | 2025-05-13 20:08:07.190633 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-05-13 20:08:07.190639 | orchestrator | Tuesday 13 May 2025 20:03:59 +0000 (0:00:01.889) 0:07:10.803 *********** 2025-05-13 20:08:07.190645 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.190652 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.190658 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.190665 | orchestrator | 2025-05-13 20:08:07.190671 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-05-13 20:08:07.190677 | orchestrator | Tuesday 13 May 2025 20:03:59 +0000 (0:00:00.293) 0:07:11.097 *********** 2025-05-13 20:08:07.190684 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.190690 | orchestrator | 2025-05-13 20:08:07.190696 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-05-13 20:08:07.190702 | orchestrator | Tuesday 13 May 2025 20:04:00 +0000 (0:00:00.755) 0:07:11.852 *********** 2025-05-13 20:08:07.190708 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-13 20:08:07.190715 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-13 20:08:07.190721 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-13 20:08:07.190727 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-13 20:08:07.190740 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-13 20:08:07.190746 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-13 20:08:07.190753 | orchestrator | 2025-05-13 20:08:07.190759 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-05-13 20:08:07.190766 | orchestrator | Tuesday 13 May 2025 20:04:01 +0000 (0:00:00.931) 0:07:12.784 *********** 2025-05-13 20:08:07.190772 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.190778 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.190784 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.190790 | orchestrator | 2025-05-13 20:08:07.190800 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-05-13 20:08:07.190806 | orchestrator | Tuesday 13 May 2025 20:04:03 +0000 (0:00:01.842) 0:07:14.626 *********** 2025-05-13 20:08:07.190812 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:08:07.190817 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.190824 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.190830 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 20:08:07.190836 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-13 20:08:07.190842 | 
orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.190848 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:08:07.190855 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-13 20:08:07.190861 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.190868 | orchestrator | 2025-05-13 20:08:07.190874 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-05-13 20:08:07.190881 | orchestrator | Tuesday 13 May 2025 20:04:04 +0000 (0:00:01.376) 0:07:16.002 *********** 2025-05-13 20:08:07.190887 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.190893 | orchestrator | 2025-05-13 20:08:07.190900 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-05-13 20:08:07.190906 | orchestrator | Tuesday 13 May 2025 20:04:06 +0000 (0:00:01.978) 0:07:17.981 *********** 2025-05-13 20:08:07.190912 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.190918 | orchestrator | 2025-05-13 20:08:07.190924 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-05-13 20:08:07.190931 | orchestrator | Tuesday 13 May 2025 20:04:06 +0000 (0:00:00.583) 0:07:18.564 *********** 2025-05-13 20:08:07.190938 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7ef241c-3ce4-53e3-9962-a0236c38cab6', 'data_vg': 'ceph-c7ef241c-3ce4-53e3-9962-a0236c38cab6'}) 2025-05-13 20:08:07.190946 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e', 'data_vg': 'ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e'}) 2025-05-13 20:08:07.190953 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9e27190a-cad1-5451-a880-ae60fcff608c', 'data_vg': 'ceph-9e27190a-cad1-5451-a880-ae60fcff608c'}) 2025-05-13 20:08:07.190959 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-55d6de5b-857a-5090-90bd-6b26b006e6c2', 'data_vg': 'ceph-55d6de5b-857a-5090-90bd-6b26b006e6c2'}) 2025-05-13 20:08:07.190972 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-53409cd5-715f-5221-bc58-8adc9fe4a6bc', 'data_vg': 'ceph-53409cd5-715f-5221-bc58-8adc9fe4a6bc'}) 2025-05-13 20:08:07.190979 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6f4317e9-8e5a-55d6-81df-460521249898', 'data_vg': 'ceph-6f4317e9-8e5a-55d6-81df-460521249898'}) 2025-05-13 20:08:07.190984 | orchestrator | 2025-05-13 20:08:07.190990 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-13 20:08:07.190996 | orchestrator | Tuesday 13 May 2025 20:04:49 +0000 (0:00:42.817) 0:08:01.381 *********** 2025-05-13 20:08:07.191001 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191012 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191019 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191025 | orchestrator | 2025-05-13 20:08:07.191032 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-05-13 20:08:07.191039 | orchestrator | Tuesday 13 May 2025 20:04:50 +0000 (0:00:00.738) 0:08:02.119 *********** 2025-05-13 20:08:07.191046 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.191053 | orchestrator | 
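
The "Use ceph-volume to create osds" task above is where the OSDs are actually provisioned: ceph-ansible loops over the pre-created LVM volume groups and hands each data LV to ceph-volume inside the ceph container, which accounts for the roughly 43-second runtime for six OSDs across the three storage nodes. A minimal sketch of the equivalent manual invocation, assuming the default bluestore objectstore and taking one VG/LV pair from the log output:

    # create one OSD from an existing logical volume (VG/LV names as logged above)
    ceph-volume lvm create \
        --data ceph-eb14b8c1-d757-5b78-a398-3e433d34ee3e/osd-block-eb14b8c1-d757-5b78-a398-3e433d34ee3e

    # the newly prepared and activated OSDs can then be listed with
    ceph-volume lvm list

The "Set noup flag" task just before this (cleared again by "Unset noup flag" further down) keeps the new OSDs from being marked up mid-provisioning, so the cluster does not start rebalancing data onto half-initialized disks.
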
2025-05-13 20:08:07.191059 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-05-13 20:08:07.191066 | orchestrator | Tuesday 13 May 2025 20:04:51 +0000 (0:00:00.506) 0:08:02.626 *********** 2025-05-13 20:08:07.191073 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.191080 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.191088 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.191094 | orchestrator | 2025-05-13 20:08:07.191102 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-05-13 20:08:07.191108 | orchestrator | Tuesday 13 May 2025 20:04:51 +0000 (0:00:00.686) 0:08:03.313 *********** 2025-05-13 20:08:07.191115 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.191122 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.191129 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.191135 | orchestrator | 2025-05-13 20:08:07.191142 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-05-13 20:08:07.191149 | orchestrator | Tuesday 13 May 2025 20:04:54 +0000 (0:00:02.713) 0:08:06.027 *********** 2025-05-13 20:08:07.191156 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.191163 | orchestrator | 2025-05-13 20:08:07.191169 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-05-13 20:08:07.191176 | orchestrator | Tuesday 13 May 2025 20:04:54 +0000 (0:00:00.518) 0:08:06.545 *********** 2025-05-13 20:08:07.191183 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.191190 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.191197 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.191204 | orchestrator | 2025-05-13 20:08:07.191211 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-05-13 20:08:07.191218 | orchestrator | Tuesday 13 May 2025 20:04:56 +0000 (0:00:01.173) 0:08:07.719 *********** 2025-05-13 20:08:07.191225 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.191231 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.191238 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.191245 | orchestrator | 2025-05-13 20:08:07.191298 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-05-13 20:08:07.191306 | orchestrator | Tuesday 13 May 2025 20:04:57 +0000 (0:00:01.391) 0:08:09.111 *********** 2025-05-13 20:08:07.191313 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.191320 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.191327 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.191334 | orchestrator | 2025-05-13 20:08:07.191341 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-05-13 20:08:07.191348 | orchestrator | Tuesday 13 May 2025 20:04:59 +0000 (0:00:01.640) 0:08:10.751 *********** 2025-05-13 20:08:07.191354 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191361 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191368 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191374 | orchestrator | 2025-05-13 20:08:07.191380 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-05-13 20:08:07.191386 | orchestrator | 
Tuesday 13 May 2025 20:04:59 +0000 (0:00:00.322) 0:08:11.073 *********** 2025-05-13 20:08:07.191391 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191397 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191403 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191409 | orchestrator | 2025-05-13 20:08:07.191425 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-05-13 20:08:07.191432 | orchestrator | Tuesday 13 May 2025 20:04:59 +0000 (0:00:00.292) 0:08:11.365 *********** 2025-05-13 20:08:07.191439 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 20:08:07.191445 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-13 20:08:07.191452 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-05-13 20:08:07.191458 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-05-13 20:08:07.191465 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-05-13 20:08:07.191471 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-13 20:08:07.191477 | orchestrator | 2025-05-13 20:08:07.191483 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-05-13 20:08:07.191490 | orchestrator | Tuesday 13 May 2025 20:05:01 +0000 (0:00:01.310) 0:08:12.675 *********** 2025-05-13 20:08:07.191496 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-13 20:08:07.191502 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-13 20:08:07.191509 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-13 20:08:07.191516 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-13 20:08:07.191522 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-05-13 20:08:07.191529 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-13 20:08:07.191536 | orchestrator | 2025-05-13 20:08:07.191543 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-05-13 20:08:07.191550 | orchestrator | Tuesday 13 May 2025 20:05:03 +0000 (0:00:02.220) 0:08:14.896 *********** 2025-05-13 20:08:07.191556 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-13 20:08:07.191563 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-13 20:08:07.191577 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-13 20:08:07.191585 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-05-13 20:08:07.191591 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-13 20:08:07.191598 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-13 20:08:07.191605 | orchestrator | 2025-05-13 20:08:07.191612 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-05-13 20:08:07.191619 | orchestrator | Tuesday 13 May 2025 20:05:06 +0000 (0:00:03.454) 0:08:18.350 *********** 2025-05-13 20:08:07.191625 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191631 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.191645 | orchestrator | 2025-05-13 20:08:07.191651 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-05-13 20:08:07.191658 | orchestrator | Tuesday 13 May 2025 20:05:09 +0000 (0:00:02.519) 0:08:20.869 *********** 2025-05-13 20:08:07.191665 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191672 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
20:08:07.191678 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-05-13 20:08:07.191685 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.191692 | orchestrator | 2025-05-13 20:08:07.191698 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-05-13 20:08:07.191736 | orchestrator | Tuesday 13 May 2025 20:05:22 +0000 (0:00:12.739) 0:08:33.609 *********** 2025-05-13 20:08:07.191744 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191751 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191758 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191765 | orchestrator | 2025-05-13 20:08:07.191771 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.191777 | orchestrator | Tuesday 13 May 2025 20:05:22 +0000 (0:00:00.812) 0:08:34.421 *********** 2025-05-13 20:08:07.191783 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191788 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191794 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191806 | orchestrator | 2025-05-13 20:08:07.191813 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-05-13 20:08:07.191820 | orchestrator | Tuesday 13 May 2025 20:05:23 +0000 (0:00:00.545) 0:08:34.967 *********** 2025-05-13 20:08:07.191827 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.191834 | orchestrator | 2025-05-13 20:08:07.191841 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-05-13 20:08:07.191848 | orchestrator | Tuesday 13 May 2025 20:05:23 +0000 (0:00:00.510) 0:08:35.478 *********** 2025-05-13 20:08:07.191854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.191860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.191866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.191872 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191879 | orchestrator | 2025-05-13 20:08:07.191888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-05-13 20:08:07.191895 | orchestrator | Tuesday 13 May 2025 20:05:24 +0000 (0:00:00.364) 0:08:35.842 *********** 2025-05-13 20:08:07.191901 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191907 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191913 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191919 | orchestrator | 2025-05-13 20:08:07.191926 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-05-13 20:08:07.191933 | orchestrator | Tuesday 13 May 2025 20:05:24 +0000 (0:00:00.287) 0:08:36.130 *********** 2025-05-13 20:08:07.191938 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.191944 | orchestrator | 2025-05-13 20:08:07.191951 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-05-13 20:08:07.191957 | orchestrator | Tuesday 13 May 2025 20:05:24 +0000 (0:00:00.204) 0:08:36.334 *********** 2025-05-13 20:08:07.191963 | orchestrator | skipping: [testbed-node-3] 2025-05-13 
20:08:07.191970 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.191976 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.191983 | orchestrator | 2025-05-13 20:08:07.191989 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-05-13 20:08:07.191996 | orchestrator | Tuesday 13 May 2025 20:05:25 +0000 (0:00:00.544) 0:08:36.879 *********** 2025-05-13 20:08:07.192002 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192009 | orchestrator | 2025-05-13 20:08:07.192016 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-05-13 20:08:07.192023 | orchestrator | Tuesday 13 May 2025 20:05:25 +0000 (0:00:00.220) 0:08:37.099 *********** 2025-05-13 20:08:07.192029 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192036 | orchestrator | 2025-05-13 20:08:07.192042 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-05-13 20:08:07.192048 | orchestrator | Tuesday 13 May 2025 20:05:25 +0000 (0:00:00.227) 0:08:37.327 *********** 2025-05-13 20:08:07.192055 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192061 | orchestrator | 2025-05-13 20:08:07.192067 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-05-13 20:08:07.192074 | orchestrator | Tuesday 13 May 2025 20:05:25 +0000 (0:00:00.117) 0:08:37.444 *********** 2025-05-13 20:08:07.192080 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192086 | orchestrator | 2025-05-13 20:08:07.192092 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-05-13 20:08:07.192097 | orchestrator | Tuesday 13 May 2025 20:05:26 +0000 (0:00:00.211) 0:08:37.656 *********** 2025-05-13 20:08:07.192104 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192110 | orchestrator | 2025-05-13 20:08:07.192117 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-05-13 20:08:07.192124 | orchestrator | Tuesday 13 May 2025 20:05:26 +0000 (0:00:00.255) 0:08:37.911 *********** 2025-05-13 20:08:07.192138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.192151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.192158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.192164 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192171 | orchestrator | 2025-05-13 20:08:07.192178 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-05-13 20:08:07.192185 | orchestrator | Tuesday 13 May 2025 20:05:26 +0000 (0:00:00.393) 0:08:38.305 *********** 2025-05-13 20:08:07.192192 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192198 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192205 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192212 | orchestrator | 2025-05-13 20:08:07.192219 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-05-13 20:08:07.192226 | orchestrator | Tuesday 13 May 2025 20:05:27 +0000 (0:00:00.283) 0:08:38.589 *********** 2025-05-13 20:08:07.192232 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192239 | orchestrator | 2025-05-13 20:08:07.192246 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable 
balancer] **************************** 2025-05-13 20:08:07.192274 | orchestrator | Tuesday 13 May 2025 20:05:27 +0000 (0:00:00.828) 0:08:39.418 *********** 2025-05-13 20:08:07.192280 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192287 | orchestrator | 2025-05-13 20:08:07.192294 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-13 20:08:07.192300 | orchestrator | 2025-05-13 20:08:07.192307 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.192314 | orchestrator | Tuesday 13 May 2025 20:05:28 +0000 (0:00:00.721) 0:08:40.139 *********** 2025-05-13 20:08:07.192321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.192329 | orchestrator | 2025-05-13 20:08:07.192335 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.192342 | orchestrator | Tuesday 13 May 2025 20:05:29 +0000 (0:00:01.209) 0:08:41.349 *********** 2025-05-13 20:08:07.192348 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.192355 | orchestrator | 2025-05-13 20:08:07.192361 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.192368 | orchestrator | Tuesday 13 May 2025 20:05:30 +0000 (0:00:01.216) 0:08:42.566 *********** 2025-05-13 20:08:07.192375 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192382 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192388 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.192395 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.192402 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.192409 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192415 | orchestrator | 2025-05-13 20:08:07.192422 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.192433 | orchestrator | Tuesday 13 May 2025 20:05:31 +0000 (0:00:00.824) 0:08:43.391 *********** 2025-05-13 20:08:07.192440 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192447 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192453 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192460 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.192467 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.192474 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.192481 | orchestrator | 2025-05-13 20:08:07.192487 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.192494 | orchestrator | Tuesday 13 May 2025 20:05:32 +0000 (0:00:00.993) 0:08:44.384 *********** 2025-05-13 20:08:07.192501 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192519 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192526 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.192533 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.192539 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.192545 | orchestrator | 2025-05-13 20:08:07.192552 | 
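
These "Check for a ... container" tasks are how ceph-handler decides which restart handlers may run on which host: each check queries the container runtime for a daemon-specific container and the corresponding handler_*_status fact is only set where one is found. A minimal sketch of the kind of query involved, assuming podman as the container binary and the ceph-<daemon>-<hostname> naming scheme used by this deployment; the exact filter ceph-ansible applies may differ:

    # non-empty output => a mon container is running on this host
    podman ps -q --filter "name=ceph-mon-testbed-node-0"

Hosts where a check is skipped (for example the OSD nodes for the mon check) are simply not in that daemon's group, so the fact stays unset and the matching restart handler never fires there.
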
orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.192559 | orchestrator | Tuesday 13 May 2025 20:05:34 +0000 (0:00:01.257) 0:08:45.642 *********** 2025-05-13 20:08:07.192566 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192573 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192579 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192586 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.192592 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.192598 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.192605 | orchestrator | 2025-05-13 20:08:07.192611 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.192617 | orchestrator | Tuesday 13 May 2025 20:05:35 +0000 (0:00:01.009) 0:08:46.651 *********** 2025-05-13 20:08:07.192624 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.192630 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192636 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.192643 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192650 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.192656 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192663 | orchestrator | 2025-05-13 20:08:07.192669 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.192676 | orchestrator | Tuesday 13 May 2025 20:05:36 +0000 (0:00:01.019) 0:08:47.671 *********** 2025-05-13 20:08:07.192682 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192689 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192695 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192702 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192708 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192715 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192721 | orchestrator | 2025-05-13 20:08:07.192728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.192734 | orchestrator | Tuesday 13 May 2025 20:05:36 +0000 (0:00:00.632) 0:08:48.303 *********** 2025-05-13 20:08:07.192746 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192753 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192760 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192767 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192773 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192778 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192784 | orchestrator | 2025-05-13 20:08:07.192790 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.192797 | orchestrator | Tuesday 13 May 2025 20:05:37 +0000 (0:00:00.833) 0:08:49.137 *********** 2025-05-13 20:08:07.192802 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.192808 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.192814 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.192819 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.192825 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.192831 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.192837 | orchestrator | 2025-05-13 20:08:07.192843 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter 
container] ********************** 2025-05-13 20:08:07.192849 | orchestrator | Tuesday 13 May 2025 20:05:38 +0000 (0:00:01.121) 0:08:50.259 *********** 2025-05-13 20:08:07.192855 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.192860 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.192866 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.192872 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.192878 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.192884 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.192890 | orchestrator | 2025-05-13 20:08:07.192900 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.192906 | orchestrator | Tuesday 13 May 2025 20:05:39 +0000 (0:00:01.225) 0:08:51.484 *********** 2025-05-13 20:08:07.192912 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.192919 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.192925 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.192932 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192938 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192944 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.192949 | orchestrator | 2025-05-13 20:08:07.192956 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.192962 | orchestrator | Tuesday 13 May 2025 20:05:40 +0000 (0:00:00.580) 0:08:52.066 *********** 2025-05-13 20:08:07.192968 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.192974 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.192980 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.192986 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.192992 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.192997 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.193003 | orchestrator | 2025-05-13 20:08:07.193010 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.193016 | orchestrator | Tuesday 13 May 2025 20:05:41 +0000 (0:00:00.782) 0:08:52.848 *********** 2025-05-13 20:08:07.193022 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.193028 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.193034 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.193041 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193047 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193053 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193059 | orchestrator | 2025-05-13 20:08:07.193065 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.193076 | orchestrator | Tuesday 13 May 2025 20:05:41 +0000 (0:00:00.622) 0:08:53.471 *********** 2025-05-13 20:08:07.193082 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.193088 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.193094 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.193101 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193108 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193114 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193120 | orchestrator | 2025-05-13 20:08:07.193127 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.193133 | orchestrator | 
Tuesday 13 May 2025 20:05:42 +0000 (0:00:00.789) 0:08:54.260 *********** 2025-05-13 20:08:07.193138 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.193144 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.193150 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.193156 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193162 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193168 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193174 | orchestrator | 2025-05-13 20:08:07.193180 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.193186 | orchestrator | Tuesday 13 May 2025 20:05:43 +0000 (0:00:00.633) 0:08:54.894 *********** 2025-05-13 20:08:07.193193 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.193198 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.193204 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.193210 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.193216 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.193222 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.193227 | orchestrator | 2025-05-13 20:08:07.193233 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.193240 | orchestrator | Tuesday 13 May 2025 20:05:44 +0000 (0:00:00.811) 0:08:55.706 *********** 2025-05-13 20:08:07.193245 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:08:07.193303 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:08:07.193310 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:08:07.193316 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.193321 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.193327 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.193334 | orchestrator | 2025-05-13 20:08:07.193339 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.193346 | orchestrator | Tuesday 13 May 2025 20:05:44 +0000 (0:00:00.563) 0:08:56.270 *********** 2025-05-13 20:08:07.193351 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193357 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.193364 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.193369 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.193375 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.193381 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.193387 | orchestrator | 2025-05-13 20:08:07.193393 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.193408 | orchestrator | Tuesday 13 May 2025 20:05:45 +0000 (0:00:00.753) 0:08:57.024 *********** 2025-05-13 20:08:07.193414 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193420 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.193427 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.193432 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193438 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193444 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193450 | orchestrator | 2025-05-13 20:08:07.193456 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.193462 | orchestrator | Tuesday 13 May 2025 20:05:46 +0000 (0:00:00.616) 0:08:57.640 
*********** 2025-05-13 20:08:07.193468 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193475 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.193482 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.193489 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193494 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193501 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193507 | orchestrator | 2025-05-13 20:08:07.193511 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-05-13 20:08:07.193515 | orchestrator | Tuesday 13 May 2025 20:05:47 +0000 (0:00:01.210) 0:08:58.850 *********** 2025-05-13 20:08:07.193519 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.193523 | orchestrator | 2025-05-13 20:08:07.193526 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-05-13 20:08:07.193530 | orchestrator | Tuesday 13 May 2025 20:05:51 +0000 (0:00:03.859) 0:09:02.710 *********** 2025-05-13 20:08:07.193534 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193538 | orchestrator | 2025-05-13 20:08:07.193542 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-05-13 20:08:07.193545 | orchestrator | Tuesday 13 May 2025 20:05:53 +0000 (0:00:01.929) 0:09:04.639 *********** 2025-05-13 20:08:07.193549 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193553 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.193557 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.193560 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.193564 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.193568 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.193572 | orchestrator | 2025-05-13 20:08:07.193575 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-05-13 20:08:07.193579 | orchestrator | Tuesday 13 May 2025 20:05:54 +0000 (0:00:01.791) 0:09:06.431 *********** 2025-05-13 20:08:07.193583 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.193587 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.193590 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.193594 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.193598 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.193602 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.193610 | orchestrator | 2025-05-13 20:08:07.193614 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-05-13 20:08:07.193618 | orchestrator | Tuesday 13 May 2025 20:05:55 +0000 (0:00:00.931) 0:09:07.362 *********** 2025-05-13 20:08:07.193622 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.193628 | orchestrator | 2025-05-13 20:08:07.193631 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-05-13 20:08:07.193639 | orchestrator | Tuesday 13 May 2025 20:05:57 +0000 (0:00:01.212) 0:09:08.575 *********** 2025-05-13 20:08:07.193643 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.193647 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.193651 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.193654 | orchestrator | changed: 
[testbed-node-3] 2025-05-13 20:08:07.193658 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.193662 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.193665 | orchestrator | 2025-05-13 20:08:07.193669 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-05-13 20:08:07.193673 | orchestrator | Tuesday 13 May 2025 20:05:58 +0000 (0:00:01.715) 0:09:10.291 *********** 2025-05-13 20:08:07.193677 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.193681 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.193684 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.193688 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.193692 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.193695 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.193699 | orchestrator | 2025-05-13 20:08:07.193703 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-05-13 20:08:07.193706 | orchestrator | Tuesday 13 May 2025 20:06:01 +0000 (0:00:03.133) 0:09:13.424 *********** 2025-05-13 20:08:07.193711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.193714 | orchestrator | 2025-05-13 20:08:07.193718 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-05-13 20:08:07.193722 | orchestrator | Tuesday 13 May 2025 20:06:03 +0000 (0:00:01.221) 0:09:14.646 *********** 2025-05-13 20:08:07.193726 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193730 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.193733 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.193737 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193741 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193745 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193748 | orchestrator | 2025-05-13 20:08:07.193752 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-05-13 20:08:07.193756 | orchestrator | Tuesday 13 May 2025 20:06:03 +0000 (0:00:00.787) 0:09:15.434 *********** 2025-05-13 20:08:07.193760 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:08:07.193763 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:08:07.193767 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.193774 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:08:07.193780 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.193786 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.193791 | orchestrator | 2025-05-13 20:08:07.193797 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-05-13 20:08:07.193803 | orchestrator | Tuesday 13 May 2025 20:06:06 +0000 (0:00:02.209) 0:09:17.643 *********** 2025-05-13 20:08:07.193815 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:08:07.193821 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:08:07.193827 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:08:07.193833 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193840 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193845 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193854 | orchestrator | 2025-05-13 20:08:07.193858 | orchestrator | PLAY [Apply role ceph-mds] 
***************************************************** 2025-05-13 20:08:07.193861 | orchestrator | 2025-05-13 20:08:07.193865 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.193869 | orchestrator | Tuesday 13 May 2025 20:06:07 +0000 (0:00:01.096) 0:09:18.739 *********** 2025-05-13 20:08:07.193873 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.193877 | orchestrator | 2025-05-13 20:08:07.193880 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.193884 | orchestrator | Tuesday 13 May 2025 20:06:07 +0000 (0:00:00.548) 0:09:19.288 *********** 2025-05-13 20:08:07.193888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.193892 | orchestrator | 2025-05-13 20:08:07.193895 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.193899 | orchestrator | Tuesday 13 May 2025 20:06:08 +0000 (0:00:00.848) 0:09:20.136 *********** 2025-05-13 20:08:07.193902 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.193906 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.193910 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.193913 | orchestrator | 2025-05-13 20:08:07.193917 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.193921 | orchestrator | Tuesday 13 May 2025 20:06:08 +0000 (0:00:00.336) 0:09:20.473 *********** 2025-05-13 20:08:07.193924 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193928 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193932 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193936 | orchestrator | 2025-05-13 20:08:07.193939 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.193943 | orchestrator | Tuesday 13 May 2025 20:06:09 +0000 (0:00:00.729) 0:09:21.202 *********** 2025-05-13 20:08:07.193947 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193950 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193954 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193958 | orchestrator | 2025-05-13 20:08:07.193961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.193965 | orchestrator | Tuesday 13 May 2025 20:06:10 +0000 (0:00:01.023) 0:09:22.225 *********** 2025-05-13 20:08:07.193969 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.193973 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.193976 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.193980 | orchestrator | 2025-05-13 20:08:07.193984 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.193987 | orchestrator | Tuesday 13 May 2025 20:06:11 +0000 (0:00:00.705) 0:09:22.931 *********** 2025-05-13 20:08:07.193991 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.193995 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194001 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194004 | orchestrator | 2025-05-13 20:08:07.194008 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-05-13 20:08:07.194050 | orchestrator | Tuesday 13 May 2025 20:06:11 +0000 (0:00:00.296) 0:09:23.227 *********** 2025-05-13 20:08:07.194055 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194059 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194062 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194066 | orchestrator | 2025-05-13 20:08:07.194070 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.194073 | orchestrator | Tuesday 13 May 2025 20:06:11 +0000 (0:00:00.286) 0:09:23.514 *********** 2025-05-13 20:08:07.194077 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194081 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194084 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194088 | orchestrator | 2025-05-13 20:08:07.194096 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.194099 | orchestrator | Tuesday 13 May 2025 20:06:12 +0000 (0:00:00.620) 0:09:24.135 *********** 2025-05-13 20:08:07.194103 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194107 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194111 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194114 | orchestrator | 2025-05-13 20:08:07.194118 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.194122 | orchestrator | Tuesday 13 May 2025 20:06:13 +0000 (0:00:00.735) 0:09:24.870 *********** 2025-05-13 20:08:07.194125 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194129 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194133 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194136 | orchestrator | 2025-05-13 20:08:07.194140 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.194144 | orchestrator | Tuesday 13 May 2025 20:06:14 +0000 (0:00:00.734) 0:09:25.605 *********** 2025-05-13 20:08:07.194147 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194151 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194155 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194158 | orchestrator | 2025-05-13 20:08:07.194162 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.194166 | orchestrator | Tuesday 13 May 2025 20:06:14 +0000 (0:00:00.345) 0:09:25.950 *********** 2025-05-13 20:08:07.194169 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194173 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194176 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194180 | orchestrator | 2025-05-13 20:08:07.194184 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.194187 | orchestrator | Tuesday 13 May 2025 20:06:14 +0000 (0:00:00.593) 0:09:26.544 *********** 2025-05-13 20:08:07.194191 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194195 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194198 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194202 | orchestrator | 2025-05-13 20:08:07.194210 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.194214 | orchestrator | Tuesday 13 May 2025 20:06:15 +0000 
(0:00:00.345) 0:09:26.889 *********** 2025-05-13 20:08:07.194218 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194222 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194225 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194229 | orchestrator | 2025-05-13 20:08:07.194233 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.194236 | orchestrator | Tuesday 13 May 2025 20:06:15 +0000 (0:00:00.398) 0:09:27.287 *********** 2025-05-13 20:08:07.194240 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194244 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194264 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194270 | orchestrator | 2025-05-13 20:08:07.194274 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.194277 | orchestrator | Tuesday 13 May 2025 20:06:16 +0000 (0:00:00.338) 0:09:27.626 *********** 2025-05-13 20:08:07.194281 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194285 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194289 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194292 | orchestrator | 2025-05-13 20:08:07.194296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.194300 | orchestrator | Tuesday 13 May 2025 20:06:16 +0000 (0:00:00.611) 0:09:28.237 *********** 2025-05-13 20:08:07.194303 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194307 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194311 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194315 | orchestrator | 2025-05-13 20:08:07.194318 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.194328 | orchestrator | Tuesday 13 May 2025 20:06:16 +0000 (0:00:00.295) 0:09:28.532 *********** 2025-05-13 20:08:07.194332 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194336 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194340 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194343 | orchestrator | 2025-05-13 20:08:07.194347 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.194351 | orchestrator | Tuesday 13 May 2025 20:06:17 +0000 (0:00:00.286) 0:09:28.819 *********** 2025-05-13 20:08:07.194354 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194358 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194362 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194366 | orchestrator | 2025-05-13 20:08:07.194369 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.194373 | orchestrator | Tuesday 13 May 2025 20:06:17 +0000 (0:00:00.321) 0:09:29.140 *********** 2025-05-13 20:08:07.194377 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194380 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194384 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194388 | orchestrator | 2025-05-13 20:08:07.194391 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-13 20:08:07.194395 | orchestrator | Tuesday 13 May 2025 20:06:18 +0000 (0:00:00.846) 0:09:29.986 *********** 2025-05-13 20:08:07.194399 | orchestrator | skipping: [testbed-node-4] 2025-05-13 
20:08:07.194402 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194409 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-13 20:08:07.194413 | orchestrator | 2025-05-13 20:08:07.194417 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-13 20:08:07.194421 | orchestrator | Tuesday 13 May 2025 20:06:18 +0000 (0:00:00.440) 0:09:30.427 *********** 2025-05-13 20:08:07.194424 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.194428 | orchestrator | 2025-05-13 20:08:07.194432 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-13 20:08:07.194435 | orchestrator | Tuesday 13 May 2025 20:06:20 +0000 (0:00:02.021) 0:09:32.448 *********** 2025-05-13 20:08:07.194441 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-13 20:08:07.194447 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194450 | orchestrator | 2025-05-13 20:08:07.194454 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-13 20:08:07.194458 | orchestrator | Tuesday 13 May 2025 20:06:21 +0000 (0:00:00.223) 0:09:32.672 *********** 2025-05-13 20:08:07.194463 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:08:07.194472 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:08:07.194476 | orchestrator | 2025-05-13 20:08:07.194480 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-13 20:08:07.194484 | orchestrator | Tuesday 13 May 2025 20:06:29 +0000 (0:00:08.602) 0:09:41.275 *********** 2025-05-13 20:08:07.194487 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:08:07.194491 | orchestrator | 2025-05-13 20:08:07.194495 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-13 20:08:07.194499 | orchestrator | Tuesday 13 May 2025 20:06:33 +0000 (0:00:03.595) 0:09:44.870 *********** 2025-05-13 20:08:07.194502 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194509 | orchestrator | 2025-05-13 20:08:07.194516 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-05-13 20:08:07.194520 | orchestrator | Tuesday 13 May 2025 20:06:33 +0000 (0:00:00.651) 0:09:45.521 *********** 2025-05-13 20:08:07.194523 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 20:08:07.194527 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 20:08:07.194531 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-13 
20:08:07.194534 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-13 20:08:07.194538 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-13 20:08:07.194542 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-13 20:08:07.194546 | orchestrator | 2025-05-13 20:08:07.194549 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-13 20:08:07.194553 | orchestrator | Tuesday 13 May 2025 20:06:35 +0000 (0:00:01.179) 0:09:46.702 *********** 2025-05-13 20:08:07.194557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.194560 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.194564 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.194568 | orchestrator | 2025-05-13 20:08:07.194571 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-13 20:08:07.194575 | orchestrator | Tuesday 13 May 2025 20:06:37 +0000 (0:00:02.457) 0:09:49.159 *********** 2025-05-13 20:08:07.194579 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:08:07.194582 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.194586 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194590 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 20:08:07.194593 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-13 20:08:07.194597 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194601 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:08:07.194604 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-13 20:08:07.194608 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194612 | orchestrator | 2025-05-13 20:08:07.194615 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-13 20:08:07.194619 | orchestrator | Tuesday 13 May 2025 20:06:39 +0000 (0:00:02.042) 0:09:51.201 *********** 2025-05-13 20:08:07.194623 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194626 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194630 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194634 | orchestrator | 2025-05-13 20:08:07.194637 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-13 20:08:07.194641 | orchestrator | Tuesday 13 May 2025 20:06:42 +0000 (0:00:02.730) 0:09:53.932 *********** 2025-05-13 20:08:07.194645 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194648 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.194652 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.194655 | orchestrator | 2025-05-13 20:08:07.194661 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-05-13 20:08:07.194665 | orchestrator | Tuesday 13 May 2025 20:06:42 +0000 (0:00:00.336) 0:09:54.268 *********** 2025-05-13 20:08:07.194669 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194673 | orchestrator | 2025-05-13 20:08:07.194677 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 
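
Earlier in this play, "Create filesystem pools" and "Create ceph filesystem" set up CephFS from the first mon: two replicated pools (cephfs_data and cephfs_metadata, 16 PGs each, size 3, replicated_rule) are created and then combined into a filesystem. A minimal sketch of the equivalent ceph CLI calls, using the pool parameters shown in the log; the filesystem name "cephfs" is an assumption (ceph-ansible's default), as the log does not print it:

    # create the data and metadata pools (16 placement groups each)
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph osd pool application enable cephfs_data cephfs
    ceph osd pool application enable cephfs_metadata cephfs

    # tie them together into a filesystem (metadata pool is named first)
    ceph fs new cephfs cephfs_metadata cephfs_data
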
2025-05-13 20:08:07.194680 | orchestrator | Tuesday 13 May 2025 20:06:43 +0000 (0:00:00.809) 0:09:55.077 *********** 2025-05-13 20:08:07.194684 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194691 | orchestrator | 2025-05-13 20:08:07.194695 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-13 20:08:07.194698 | orchestrator | Tuesday 13 May 2025 20:06:44 +0000 (0:00:00.526) 0:09:55.603 *********** 2025-05-13 20:08:07.194702 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194706 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194709 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194713 | orchestrator | 2025-05-13 20:08:07.194717 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-13 20:08:07.194720 | orchestrator | Tuesday 13 May 2025 20:06:45 +0000 (0:00:01.237) 0:09:56.841 *********** 2025-05-13 20:08:07.194724 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194728 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194731 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194735 | orchestrator | 2025-05-13 20:08:07.194739 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-13 20:08:07.194742 | orchestrator | Tuesday 13 May 2025 20:06:46 +0000 (0:00:01.410) 0:09:58.251 *********** 2025-05-13 20:08:07.194746 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194750 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194753 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194757 | orchestrator | 2025-05-13 20:08:07.194761 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-05-13 20:08:07.194764 | orchestrator | Tuesday 13 May 2025 20:06:48 +0000 (0:00:01.815) 0:10:00.067 *********** 2025-05-13 20:08:07.194768 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194775 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194781 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194787 | orchestrator | 2025-05-13 20:08:07.194793 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-13 20:08:07.194799 | orchestrator | Tuesday 13 May 2025 20:06:50 +0000 (0:00:02.002) 0:10:02.069 *********** 2025-05-13 20:08:07.194805 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194811 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194818 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194824 | orchestrator | 2025-05-13 20:08:07.194830 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.194834 | orchestrator | Tuesday 13 May 2025 20:06:52 +0000 (0:00:01.557) 0:10:03.627 *********** 2025-05-13 20:08:07.194838 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194842 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194845 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194849 | orchestrator | 2025-05-13 20:08:07.194852 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-13 20:08:07.194856 | orchestrator | Tuesday 13 May 2025 20:06:52 +0000 (0:00:00.671) 0:10:04.298 *********** 2025-05-13 20:08:07.194860 | orchestrator | 
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194864 | orchestrator | 2025-05-13 20:08:07.194867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-13 20:08:07.194871 | orchestrator | Tuesday 13 May 2025 20:06:53 +0000 (0:00:00.758) 0:10:05.056 *********** 2025-05-13 20:08:07.194875 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194878 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194882 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194886 | orchestrator | 2025-05-13 20:08:07.194889 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-13 20:08:07.194893 | orchestrator | Tuesday 13 May 2025 20:06:53 +0000 (0:00:00.314) 0:10:05.370 *********** 2025-05-13 20:08:07.194897 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.194900 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.194904 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.194908 | orchestrator | 2025-05-13 20:08:07.194915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-13 20:08:07.194919 | orchestrator | Tuesday 13 May 2025 20:06:54 +0000 (0:00:01.189) 0:10:06.560 *********** 2025-05-13 20:08:07.194923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.194926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.194930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.194934 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.194937 | orchestrator | 2025-05-13 20:08:07.194941 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-13 20:08:07.194945 | orchestrator | Tuesday 13 May 2025 20:06:55 +0000 (0:00:00.948) 0:10:07.509 *********** 2025-05-13 20:08:07.194949 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.194952 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.194956 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.194960 | orchestrator | 2025-05-13 20:08:07.194963 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-13 20:08:07.194967 | orchestrator | 2025-05-13 20:08:07.194971 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-13 20:08:07.194974 | orchestrator | Tuesday 13 May 2025 20:06:56 +0000 (0:00:00.802) 0:10:08.311 *********** 2025-05-13 20:08:07.194978 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194982 | orchestrator | 2025-05-13 20:08:07.194986 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-13 20:08:07.194992 | orchestrator | Tuesday 13 May 2025 20:06:57 +0000 (0:00:00.492) 0:10:08.803 *********** 2025-05-13 20:08:07.194996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.194999 | orchestrator | 2025-05-13 20:08:07.195003 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-13 20:08:07.195007 | orchestrator | Tuesday 13 May 2025 20:06:58 +0000 (0:00:00.769) 
0:10:09.573 *********** 2025-05-13 20:08:07.195011 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195014 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195018 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195022 | orchestrator | 2025-05-13 20:08:07.195025 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-13 20:08:07.195029 | orchestrator | Tuesday 13 May 2025 20:06:58 +0000 (0:00:00.313) 0:10:09.887 *********** 2025-05-13 20:08:07.195033 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195037 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195040 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195044 | orchestrator | 2025-05-13 20:08:07.195048 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-13 20:08:07.195052 | orchestrator | Tuesday 13 May 2025 20:06:59 +0000 (0:00:00.709) 0:10:10.596 *********** 2025-05-13 20:08:07.195055 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195059 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195063 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195066 | orchestrator | 2025-05-13 20:08:07.195070 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-13 20:08:07.195074 | orchestrator | Tuesday 13 May 2025 20:06:59 +0000 (0:00:00.711) 0:10:11.308 *********** 2025-05-13 20:08:07.195077 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195081 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195085 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195088 | orchestrator | 2025-05-13 20:08:07.195092 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-13 20:08:07.195096 | orchestrator | Tuesday 13 May 2025 20:07:00 +0000 (0:00:01.022) 0:10:12.330 *********** 2025-05-13 20:08:07.195099 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195103 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195110 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195113 | orchestrator | 2025-05-13 20:08:07.195117 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-13 20:08:07.195121 | orchestrator | Tuesday 13 May 2025 20:07:01 +0000 (0:00:00.315) 0:10:12.646 *********** 2025-05-13 20:08:07.195125 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195128 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195132 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195136 | orchestrator | 2025-05-13 20:08:07.195139 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-13 20:08:07.195146 | orchestrator | Tuesday 13 May 2025 20:07:01 +0000 (0:00:00.279) 0:10:12.925 *********** 2025-05-13 20:08:07.195149 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195153 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195157 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195160 | orchestrator | 2025-05-13 20:08:07.195164 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-13 20:08:07.195168 | orchestrator | Tuesday 13 May 2025 20:07:01 +0000 (0:00:00.269) 0:10:13.195 *********** 2025-05-13 20:08:07.195172 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195175 
| orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195179 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195183 | orchestrator | 2025-05-13 20:08:07.195186 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-13 20:08:07.195190 | orchestrator | Tuesday 13 May 2025 20:07:02 +0000 (0:00:00.998) 0:10:14.194 *********** 2025-05-13 20:08:07.195194 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195198 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195201 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195205 | orchestrator | 2025-05-13 20:08:07.195209 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-13 20:08:07.195213 | orchestrator | Tuesday 13 May 2025 20:07:03 +0000 (0:00:00.705) 0:10:14.899 *********** 2025-05-13 20:08:07.195216 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195220 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195224 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195227 | orchestrator | 2025-05-13 20:08:07.195231 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-13 20:08:07.195235 | orchestrator | Tuesday 13 May 2025 20:07:03 +0000 (0:00:00.292) 0:10:15.191 *********** 2025-05-13 20:08:07.195239 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195242 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195260 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195268 | orchestrator | 2025-05-13 20:08:07.195274 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-13 20:08:07.195281 | orchestrator | Tuesday 13 May 2025 20:07:03 +0000 (0:00:00.293) 0:10:15.484 *********** 2025-05-13 20:08:07.195288 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195294 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195300 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195305 | orchestrator | 2025-05-13 20:08:07.195309 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-13 20:08:07.195313 | orchestrator | Tuesday 13 May 2025 20:07:04 +0000 (0:00:00.582) 0:10:16.067 *********** 2025-05-13 20:08:07.195317 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195320 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195324 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195328 | orchestrator | 2025-05-13 20:08:07.195332 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-13 20:08:07.195335 | orchestrator | Tuesday 13 May 2025 20:07:04 +0000 (0:00:00.336) 0:10:16.404 *********** 2025-05-13 20:08:07.195339 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195343 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195346 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195350 | orchestrator | 2025-05-13 20:08:07.195354 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-13 20:08:07.195360 | orchestrator | Tuesday 13 May 2025 20:07:05 +0000 (0:00:00.305) 0:10:16.709 *********** 2025-05-13 20:08:07.195367 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195371 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195374 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195378 | 
orchestrator | 2025-05-13 20:08:07.195382 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-13 20:08:07.195386 | orchestrator | Tuesday 13 May 2025 20:07:05 +0000 (0:00:00.303) 0:10:17.013 *********** 2025-05-13 20:08:07.195389 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195393 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195397 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195401 | orchestrator | 2025-05-13 20:08:07.195404 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-13 20:08:07.195408 | orchestrator | Tuesday 13 May 2025 20:07:05 +0000 (0:00:00.547) 0:10:17.561 *********** 2025-05-13 20:08:07.195412 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195416 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195419 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195423 | orchestrator | 2025-05-13 20:08:07.195427 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-13 20:08:07.195430 | orchestrator | Tuesday 13 May 2025 20:07:06 +0000 (0:00:00.300) 0:10:17.861 *********** 2025-05-13 20:08:07.195434 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195438 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195442 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195445 | orchestrator | 2025-05-13 20:08:07.195449 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-13 20:08:07.195453 | orchestrator | Tuesday 13 May 2025 20:07:06 +0000 (0:00:00.316) 0:10:18.177 *********** 2025-05-13 20:08:07.195456 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.195460 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.195464 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.195467 | orchestrator | 2025-05-13 20:08:07.195471 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-13 20:08:07.195475 | orchestrator | Tuesday 13 May 2025 20:07:07 +0000 (0:00:00.749) 0:10:18.927 *********** 2025-05-13 20:08:07.195479 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.195482 | orchestrator | 2025-05-13 20:08:07.195486 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-13 20:08:07.195490 | orchestrator | Tuesday 13 May 2025 20:07:07 +0000 (0:00:00.501) 0:10:19.429 *********** 2025-05-13 20:08:07.195494 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195497 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.195501 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.195505 | orchestrator | 2025-05-13 20:08:07.195509 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-13 20:08:07.195515 | orchestrator | Tuesday 13 May 2025 20:07:10 +0000 (0:00:02.318) 0:10:21.748 *********** 2025-05-13 20:08:07.195519 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:08:07.195523 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-13 20:08:07.195527 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.195530 | orchestrator | changed: [testbed-node-4] => 
(item=None) 2025-05-13 20:08:07.195534 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-13 20:08:07.195538 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.195541 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:08:07.195545 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-13 20:08:07.195549 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.195553 | orchestrator | 2025-05-13 20:08:07.195556 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-13 20:08:07.195563 | orchestrator | Tuesday 13 May 2025 20:07:11 +0000 (0:00:01.445) 0:10:23.194 *********** 2025-05-13 20:08:07.195567 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195570 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195574 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195578 | orchestrator | 2025-05-13 20:08:07.195581 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-13 20:08:07.195585 | orchestrator | Tuesday 13 May 2025 20:07:11 +0000 (0:00:00.302) 0:10:23.496 *********** 2025-05-13 20:08:07.195589 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.195593 | orchestrator | 2025-05-13 20:08:07.195597 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-13 20:08:07.195600 | orchestrator | Tuesday 13 May 2025 20:07:12 +0000 (0:00:00.508) 0:10:24.005 *********** 2025-05-13 20:08:07.195604 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.195608 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.195612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.195616 | orchestrator | 2025-05-13 20:08:07.195620 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-13 20:08:07.195623 | orchestrator | Tuesday 13 May 2025 20:07:13 +0000 (0:00:01.317) 0:10:25.323 *********** 2025-05-13 20:08:07.195627 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195631 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 20:08:07.195637 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195641 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 20:08:07.195645 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195648 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-13 20:08:07.195652 | orchestrator | 2025-05-13 20:08:07.195656 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
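The "Create rgw keyrings" step above is delegated to the first monitor (testbed-node-0) and creates one client key per radosgw instance. As a rough manual equivalent, assuming admin credentials on that monitor and the instance name rgw0 from the inventory items; the capability set and keyring path are illustrative, not copied from the role:

  # Hypothetical manual equivalent of "Create rgw keyrings" for one node
  ceph auth get-or-create client.rgw.testbed-node-3.rgw0 \
      osd 'allow rwx' mon 'allow rw' \
      -o /var/lib/ceph/radosgw/ceph-rgw.testbed-node-3.rgw0/keyring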
2025-05-13 20:08:07.195660 | orchestrator | Tuesday 13 May 2025 20:07:18 +0000 (0:00:04.463) 0:10:29.786 *********** 2025-05-13 20:08:07.195663 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195667 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.195671 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195674 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.195678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:08:07.195682 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:08:07.195685 | orchestrator | 2025-05-13 20:08:07.195689 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-13 20:08:07.195693 | orchestrator | Tuesday 13 May 2025 20:07:20 +0000 (0:00:02.119) 0:10:31.905 *********** 2025-05-13 20:08:07.195697 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:08:07.195700 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.195704 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 20:08:07.195708 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.195717 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:08:07.195721 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.195724 | orchestrator | 2025-05-13 20:08:07.195728 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-13 20:08:07.195732 | orchestrator | Tuesday 13 May 2025 20:07:21 +0000 (0:00:01.244) 0:10:33.149 *********** 2025-05-13 20:08:07.195736 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-13 20:08:07.195739 | orchestrator | 2025-05-13 20:08:07.195743 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-13 20:08:07.195747 | orchestrator | Tuesday 13 May 2025 20:07:21 +0000 (0:00:00.219) 0:10:33.369 *********** 2025-05-13 20:08:07.195755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195775 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195778 | orchestrator | 2025-05-13 20:08:07.195782 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-13 20:08:07.195786 | orchestrator | Tuesday 13 May 2025 20:07:22 +0000 (0:00:01.132) 0:10:34.501 *********** 2025-05-13 20:08:07.195790 | orchestrator | skipping: [testbed-node-3] => (item={'key':
'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-13 20:08:07.195808 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195812 | orchestrator | 2025-05-13 20:08:07.195816 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-13 20:08:07.195820 | orchestrator | Tuesday 13 May 2025 20:07:23 +0000 (0:00:00.586) 0:10:35.087 *********** 2025-05-13 20:08:07.195823 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 20:08:07.195827 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 20:08:07.195833 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 20:08:07.195837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 20:08:07.195841 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-13 20:08:07.195848 | orchestrator | 2025-05-13 20:08:07.195851 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-13 20:08:07.195855 | orchestrator | Tuesday 13 May 2025 20:07:53 +0000 (0:00:30.457) 0:11:05.545 *********** 2025-05-13 20:08:07.195859 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195863 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195866 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195870 | orchestrator | 2025-05-13 20:08:07.195874 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-13 20:08:07.195877 | orchestrator | Tuesday 13 May 2025 20:07:54 +0000 (0:00:00.329) 0:11:05.875 *********** 2025-05-13 20:08:07.195881 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.195885 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.195889 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.195892 | orchestrator | 2025-05-13 20:08:07.195896 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-13 20:08:07.195900 | orchestrator | Tuesday 13 May 2025 20:07:54 +0000 (0:00:00.295) 0:11:06.171 *********** 2025-05-13 20:08:07.195903 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2025-05-13 20:08:07.195907 | orchestrator | 2025-05-13 20:08:07.195911 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-13 20:08:07.195915 | orchestrator | Tuesday 13 May 2025 20:07:55 +0000 (0:00:00.822) 0:11:06.993 *********** 2025-05-13 20:08:07.195918 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.195922 | orchestrator | 2025-05-13 20:08:07.195926 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-13 20:08:07.195930 | orchestrator | Tuesday 13 May 2025 20:07:55 +0000 (0:00:00.514) 0:11:07.508 *********** 2025-05-13 20:08:07.195933 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.195937 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.195941 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.195944 | orchestrator | 2025-05-13 20:08:07.195948 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-13 20:08:07.195952 | orchestrator | Tuesday 13 May 2025 20:07:57 +0000 (0:00:01.233) 0:11:08.741 *********** 2025-05-13 20:08:07.195958 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.195962 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.195966 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.195969 | orchestrator | 2025-05-13 20:08:07.195973 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-13 20:08:07.195977 | orchestrator | Tuesday 13 May 2025 20:07:58 +0000 (0:00:01.421) 0:11:10.163 *********** 2025-05-13 20:08:07.195980 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:08:07.195984 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:08:07.195988 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:08:07.195991 | orchestrator | 2025-05-13 20:08:07.195995 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-13 20:08:07.195999 | orchestrator | Tuesday 13 May 2025 20:08:00 +0000 (0:00:01.766) 0:11:11.929 *********** 2025-05-13 20:08:07.196003 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.196006 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.196010 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-13 20:08:07.196014 | orchestrator | 2025-05-13 20:08:07.196018 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-13 20:08:07.196021 | orchestrator | Tuesday 13 May 2025 20:08:02 +0000 (0:00:02.586) 0:11:14.516 *********** 2025-05-13 20:08:07.196025 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.196035 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.196039 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.196042 | orchestrator | 2025-05-13 20:08:07.196046 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-13 20:08:07.196050 | orchestrator | Tuesday 13 May 2025 20:08:03 +0000 (0:00:00.340) 0:11:14.857 ***********
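Per the items above, every node runs one containerized radosgw instance (rgw0) on its own 192.168.16.x address, port 8081, and the pool list is created with pg_num 8 and size 3 (the 30-second "Create rgw pools" entry in the recap below). A minimal shell sketch of the same steps, assuming admin credentials on a monitor; the systemd unit name format is an assumption, not taken from the log:

  # Create the default RGW pools (pg_num 8, replicated, size 3, as in the items above)
  for pool in default.rgw.buckets.data default.rgw.buckets.index \
              default.rgw.control default.rgw.log default.rgw.meta; do
      ceph osd pool create "$pool" 8 8 replicated
      ceph osd pool application enable "$pool" rgw
  done
  # On each rgw node: enable the generated target and start the instance unit
  systemctl enable --now ceph-radosgw.target
  systemctl start "ceph-radosgw@rgw.$(hostname).rgw0"   # unit name format is an assumption
  # Each instance should then answer on its radosgw_address:radosgw_frontend_port
  curl -s http://192.168.16.13:8081/                    # testbed-node-3's rgw0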
2025-05-13 20:08:07.196054 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:08:07.196057 | orchestrator | 2025-05-13 20:08:07.196061 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-13 20:08:07.196065 | orchestrator | Tuesday 13 May 2025 20:08:03 +0000 (0:00:00.531) 0:11:15.388 *********** 2025-05-13 20:08:07.196069 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.196072 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.196076 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.196080 | orchestrator | 2025-05-13 20:08:07.196083 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-13 20:08:07.196087 | orchestrator | Tuesday 13 May 2025 20:08:04 +0000 (0:00:00.550) 0:11:15.938 *********** 2025-05-13 20:08:07.196091 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.196095 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:08:07.196098 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:08:07.196102 | orchestrator | 2025-05-13 20:08:07.196106 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-13 20:08:07.196112 | orchestrator | Tuesday 13 May 2025 20:08:04 +0000 (0:00:00.597) 0:11:16.278 *********** 2025-05-13 20:08:07.196116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:08:07.196119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:08:07.196123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:08:07.196127 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:08:07.196131 | orchestrator | 2025-05-13 20:08:07.196134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-13 20:08:07.196138 | orchestrator | Tuesday 13 May 2025 20:08:05 +0000 (0:00:00.550) 0:11:16.876 *********** 2025-05-13 20:08:07.196142 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:08:07.196146 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:08:07.196149 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:08:07.196153 | orchestrator | 2025-05-13 20:08:07.196157 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:08:07.196161 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-05-13 20:08:07.196165 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-05-13 20:08:07.196169 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-05-13 20:08:07.196172 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-05-13 20:08:07.196176 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-05-13 20:08:07.196180 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-05-13 20:08:07.196184 | orchestrator | 2025-05-13 20:08:07.196187 | orchestrator | 2025-05-13 20:08:07.196191 | orchestrator | 2025-05-13 20:08:07.196195 | orchestrator | TASKS RECAP ********************************************************************
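With the PLAY RECAP showing failed=0 and unreachable=0 on all six nodes, the deployed cluster can be spot-checked by hand before the job moves on. A quick, read-only verification from any monitor node; the filesystem name cephfs is the ceph-ansible default and assumed here:

  ceph -s                  # overall health, mon quorum, mgr/osd/rgw daemon counts
  ceph osd tree            # OSD distribution across testbed-node-0..5
  ceph fs status cephfs    # MDS ranks plus the cephfs_data/cephfs_metadata pools
  ceph df                  # per-pool usage, including the new default.rgw.* pools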
2025-05-13 20:08:07.196199 | orchestrator | Tuesday 13 May 2025 20:08:05 +0000 (0:00:00.242) 0:11:17.119 *********** 2025-05-13 20:08:07.196202 | orchestrator | =============================================================================== 2025-05-13 20:08:07.196210 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 76.30s 2025-05-13 20:08:07.196216 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.82s 2025-05-13 20:08:07.196219 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.46s 2025-05-13 20:08:07.196223 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.30s 2025-05-13 20:08:07.196227 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.78s 2025-05-13 20:08:07.196231 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.99s 2025-05-13 20:08:07.196234 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.74s 2025-05-13 20:08:07.196238 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.90s 2025-05-13 20:08:07.196242 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.14s 2025-05-13 20:08:07.196245 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.60s 2025-05-13 20:08:07.196278 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.71s 2025-05-13 20:08:07.196282 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.32s 2025-05-13 20:08:07.196285 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.61s 2025-05-13 20:08:07.196289 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.46s 2025-05-13 20:08:07.196293 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.86s 2025-05-13 20:08:07.196297 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.65s 2025-05-13 20:08:07.196300 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.60s 2025-05-13 20:08:07.196304 | orchestrator | ceph-container-common : Generate systemd ceph target file --------------- 3.52s 2025-05-13 20:08:07.196308 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.47s 2025-05-13 20:08:07.196311 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s 2025-05-13 20:08:07.196315 | orchestrator | 2025-05-13 20:08:07 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:10.218536 | orchestrator | 2025-05-13 20:08:10 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:10.218652 | orchestrator | 2025-05-13 20:08:10 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:10.219916 | orchestrator | 2025-05-13 20:08:10 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:08:10.219994 | orchestrator | 2025-05-13 20:08:10 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:13.274198 | orchestrator | 2025-05-13 20:08:13 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:13.274409 | orchestrator | 2025-05-13 20:08:13 | INFO  | Task
635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:13.278804 | orchestrator | 2025-05-13 20:08:13 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:08:13.278825 | orchestrator | 2025-05-13 20:08:13 | INFO  | Wait 1 second(s) until the next check [... the same three-task poll (c47f64d6, 635814da, 53af6222, all in state STARTED) repeated every three seconds from 20:08:16 through 20:08:53 ...] 2025-05-13 20:08:56.061685 | orchestrator
| 2025-05-13 20:08:56 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:56.063389 | orchestrator | 2025-05-13 20:08:56 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:56.064856 | orchestrator | 2025-05-13 20:08:56 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:08:56.064906 | orchestrator | 2025-05-13 20:08:56 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:08:59.110437 | orchestrator | 2025-05-13 20:08:59 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:08:59.112552 | orchestrator | 2025-05-13 20:08:59 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:08:59.114793 | orchestrator | 2025-05-13 20:08:59 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:08:59.114846 | orchestrator | 2025-05-13 20:08:59 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:02.171092 | orchestrator | 2025-05-13 20:09:02 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:02.171958 | orchestrator | 2025-05-13 20:09:02 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:09:02.173715 | orchestrator | 2025-05-13 20:09:02 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:09:02.173741 | orchestrator | 2025-05-13 20:09:02 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:05.220667 | orchestrator | 2025-05-13 20:09:05 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:05.222286 | orchestrator | 2025-05-13 20:09:05 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state STARTED 2025-05-13 20:09:05.224284 | orchestrator | 2025-05-13 20:09:05 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:09:05.224469 | orchestrator | 2025-05-13 20:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:08.272083 | orchestrator | 2025-05-13 20:09:08 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:08.274793 | orchestrator | 2025-05-13 20:09:08 | INFO  | Task 635814da-fbd0-4f33-8c66-8f4bed802a05 is in state SUCCESS 2025-05-13 20:09:08.276079 | orchestrator | 2025-05-13 20:09:08.276376 | orchestrator | 2025-05-13 20:09:08.276407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:09:08.276429 | orchestrator | 2025-05-13 20:09:08.276447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:09:08.276466 | orchestrator | Tuesday 13 May 2025 20:06:06 +0000 (0:00:00.276) 0:00:00.276 *********** 2025-05-13 20:09:08.276486 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:09:08.276505 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:09:08.276524 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:09:08.276542 | orchestrator | 2025-05-13 20:09:08.276562 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:09:08.276582 | orchestrator | Tuesday 13 May 2025 20:06:06 +0000 (0:00:00.295) 0:00:00.572 *********** 2025-05-13 20:09:08.276601 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-13 20:09:08.276622 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-13 20:09:08.276636 | orchestrator | ok: [testbed-node-2] => 
(item=enable_opensearch_True) 2025-05-13 20:09:08.276647 | orchestrator | 2025-05-13 20:09:08.276657 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-13 20:09:08.276668 | orchestrator | 2025-05-13 20:09:08.276679 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 20:09:08.276689 | orchestrator | Tuesday 13 May 2025 20:06:07 +0000 (0:00:00.491) 0:00:01.063 *********** 2025-05-13 20:09:08.276701 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:09:08.276711 | orchestrator | 2025-05-13 20:09:08.276722 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-13 20:09:08.276734 | orchestrator | Tuesday 13 May 2025 20:06:07 +0000 (0:00:00.551) 0:00:01.615 *********** 2025-05-13 20:09:08.276744 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 20:09:08.276755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 20:09:08.276766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-13 20:09:08.276777 | orchestrator | 2025-05-13 20:09:08.276788 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-13 20:09:08.276799 | orchestrator | Tuesday 13 May 2025 20:06:08 +0000 (0:00:00.718) 0:00:02.334 *********** 2025-05-13 20:09:08.276814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.276853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.276893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.276909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.276924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.276945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.276958 | orchestrator | 2025-05-13 20:09:08.276973 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 20:09:08.276986 | orchestrator | Tuesday 13 May 2025 20:06:10 +0000 (0:00:01.772) 0:00:04.106 *********** 2025-05-13 20:09:08.276999 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:09:08.277012 | orchestrator | 2025-05-13 20:09:08.277024 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-13 20:09:08.277037 | orchestrator | Tuesday 13 May 2025 20:06:10 +0000 (0:00:00.525) 0:00:04.631 *********** 2025-05-13 20:09:08.277067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277241 | orchestrator | 2025-05-13 20:09:08.277262 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 
2025-05-13 20:09:08.277280 | orchestrator | Tuesday 13 May 2025 20:06:13 +0000 (0:00:02.603) 0:00:07.235 *********** 2025-05-13 20:09:08.277313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277355 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:09:08.277377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277413 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:09:08.277424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277456 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:09:08.277467 | orchestrator | 2025-05-13 20:09:08.277477 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-13 20:09:08.277488 | orchestrator | Tuesday 13 May 2025 20:06:15 +0000 (0:00:01.625) 0:00:08.860 *********** 2025-05-13 20:09:08.277504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277544 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:09:08.277556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277580 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:09:08.277591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-13 20:09:08.277616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-13 20:09:08.277789 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:09:08.277806 | orchestrator | 2025-05-13 20:09:08.277817 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-13 20:09:08.277828 | orchestrator | Tuesday 13 May 2025 20:06:16 +0000 (0:00:01.007) 0:00:09.868 *********** 2025-05-13 20:09:08.277839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.277894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.277940 | orchestrator | 2025-05-13 20:09:08.277951 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-13 20:09:08.277962 | orchestrator | Tuesday 13 May 2025 20:06:18 +0000 (0:00:02.605) 0:00:12.473 *********** 2025-05-13 20:09:08.277973 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.277984 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:09:08.277995 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:09:08.278006 | orchestrator | 2025-05-13 20:09:08.278074 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-13 20:09:08.278089 | orchestrator | Tuesday 13 May 2025 20:06:22 +0000 (0:00:03.356) 0:00:15.829 *********** 2025-05-13 20:09:08.278100 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.278111 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:09:08.278121 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:09:08.278132 | orchestrator | 2025-05-13 20:09:08.278143 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-13 20:09:08.278153 | orchestrator | Tuesday 13 May 2025 20:06:23 +0000 (0:00:01.710) 0:00:17.539 *********** 2025-05-13 20:09:08.278170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.278224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.278261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-13 20:09:08.278282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.278300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.278327 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-13 20:09:08.278347 | orchestrator | 2025-05-13 20:09:08.278358 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 20:09:08.278369 | orchestrator | Tuesday 13 May 2025 20:06:26 +0000 (0:00:02.438) 0:00:19.978 *********** 2025-05-13 20:09:08.278380 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:09:08.278391 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:09:08.278404 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:09:08.278417 | orchestrator | 2025-05-13 20:09:08.278430 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 20:09:08.278442 | orchestrator | Tuesday 13 May 2025 20:06:26 +0000 (0:00:00.448) 0:00:20.427 *********** 2025-05-13 20:09:08.278454 | orchestrator | 2025-05-13 20:09:08.278467 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 20:09:08.278479 | orchestrator | Tuesday 13 May 2025 20:06:26 +0000 (0:00:00.064) 0:00:20.492 *********** 2025-05-13 20:09:08.278491 | orchestrator | 2025-05-13 20:09:08.278505 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-13 20:09:08.278517 | orchestrator | Tuesday 13 May 2025 20:06:26 +0000 (0:00:00.064) 0:00:20.556 *********** 2025-05-13 20:09:08.278529 | orchestrator | 2025-05-13 20:09:08.278540 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-13 20:09:08.278553 | orchestrator | Tuesday 13 May 2025 20:06:27 +0000 (0:00:00.275) 0:00:20.831 *********** 2025-05-13 20:09:08.278566 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:09:08.278579 | orchestrator | 2025-05-13 20:09:08.278591 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-13 20:09:08.278603 | orchestrator | Tuesday 13 May 2025 20:06:27 +0000 (0:00:00.208) 0:00:21.040 *********** 2025-05-13 20:09:08.278616 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:09:08.278629 | orchestrator | 2025-05-13 20:09:08.278641 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-13 20:09:08.278653 | orchestrator | Tuesday 13 May 2025 20:06:27 +0000 (0:00:00.215) 0:00:21.255 *********** 2025-05-13 20:09:08.278667 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.278681 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:09:08.278694 | 
orchestrator | changed: [testbed-node-2] 2025-05-13 20:09:08.278706 | orchestrator | 2025-05-13 20:09:08.278718 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-13 20:09:08.278731 | orchestrator | Tuesday 13 May 2025 20:07:39 +0000 (0:01:12.336) 0:01:33.592 *********** 2025-05-13 20:09:08.278744 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.278755 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:09:08.278766 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:09:08.278777 | orchestrator | 2025-05-13 20:09:08.278787 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-13 20:09:08.278798 | orchestrator | Tuesday 13 May 2025 20:08:57 +0000 (0:01:17.217) 0:02:50.810 *********** 2025-05-13 20:09:08.278809 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:09:08.278820 | orchestrator | 2025-05-13 20:09:08.278831 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-13 20:09:08.278848 | orchestrator | Tuesday 13 May 2025 20:08:57 +0000 (0:00:00.639) 0:02:51.449 *********** 2025-05-13 20:09:08.278859 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:09:08.278870 | orchestrator | 2025-05-13 20:09:08.278880 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-13 20:09:08.278891 | orchestrator | Tuesday 13 May 2025 20:09:00 +0000 (0:00:02.408) 0:02:53.858 *********** 2025-05-13 20:09:08.278901 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:09:08.278912 | orchestrator | 2025-05-13 20:09:08.278923 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-13 20:09:08.278933 | orchestrator | Tuesday 13 May 2025 20:09:02 +0000 (0:00:02.169) 0:02:56.027 *********** 2025-05-13 20:09:08.278944 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.278955 | orchestrator | 2025-05-13 20:09:08.278965 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-13 20:09:08.278976 | orchestrator | Tuesday 13 May 2025 20:09:04 +0000 (0:00:02.595) 0:02:58.623 *********** 2025-05-13 20:09:08.278986 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:09:08.278997 | orchestrator | 2025-05-13 20:09:08.279008 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:09:08.279019 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:09:08.279032 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:09:08.279047 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:09:08.279058 | orchestrator | 2025-05-13 20:09:08.279070 | orchestrator | 2025-05-13 20:09:08.279080 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:09:08.279096 | orchestrator | Tuesday 13 May 2025 20:09:07 +0000 (0:00:02.473) 0:03:01.097 *********** 2025-05-13 20:09:08.279108 | orchestrator | =============================================================================== 2025-05-13 20:09:08.279118 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.22s 2025-05-13 
20:09:08.279129 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.34s 2025-05-13 20:09:08.279140 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.36s 2025-05-13 20:09:08.279150 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.61s 2025-05-13 20:09:08.279161 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.60s 2025-05-13 20:09:08.279172 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2025-05-13 20:09:08.279211 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.47s 2025-05-13 20:09:08.279223 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.44s 2025-05-13 20:09:08.279234 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.41s 2025-05-13 20:09:08.279245 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2025-05-13 20:09:08.279255 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.77s 2025-05-13 20:09:08.279266 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.71s 2025-05-13 20:09:08.279277 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.63s 2025-05-13 20:09:08.279288 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.01s 2025-05-13 20:09:08.279298 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s 2025-05-13 20:09:08.279309 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2025-05-13 20:09:08.279320 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-05-13 20:09:08.279337 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-05-13 20:09:08.279348 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-05-13 20:09:08.279359 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2025-05-13 20:09:08.279370 | orchestrator | 2025-05-13 20:09:08 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:09:08.279381 | orchestrator | 2025-05-13 20:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:11.323479 | orchestrator | 2025-05-13 20:09:11 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:11.325549 | orchestrator | 2025-05-13 20:09:11 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:09:11.325743 | orchestrator | 2025-05-13 20:09:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:14.378549 | orchestrator | 2025-05-13 20:09:14 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:14.380722 | orchestrator | 2025-05-13 20:09:14 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:09:14.381012 | orchestrator | 2025-05-13 20:09:14 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:09:17.431132 | orchestrator | 2025-05-13 20:09:17 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:09:17.433098 | orchestrator | 
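The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above and below come from the OSISM client polling the state of its asynchronous deploy tasks until each one reports SUCCESS. A minimal sketch of that polling pattern in Python — the get_task_state() helper, the state names, and the loop structure are assumptions for illustration; the actual OSISM client API may differ:

    import time

    POLL_INTERVAL = 1  # matches the "Wait 1 second(s)" messages in the log

    def wait_for_tasks(task_ids, get_task_state):
        """Poll each task until every one reports SUCCESS."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                # get_task_state is a hypothetical helper returning e.g. "STARTED" or "SUCCESS"
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {POLL_INTERVAL} second(s) until the next check")
                time.sleep(POLL_INTERVAL)

A long run of identical polling lines has been elided below; the tasks remain in state STARTED until the SUCCESS transition at 20:10:15.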
2025-05-13 20:09:44.976129 | orchestrator | 2025-05-13 20:09:44 | INFO  |
Wait 1 second(s) until the next check 2025-05-13 20:10:12.490894 | orchestrator | 2025-05-13 20:10:12 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state STARTED 2025-05-13 20:10:12.492745 | orchestrator | 2025-05-13 20:10:12 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED 2025-05-13 20:10:12.493303 | orchestrator | 2025-05-13 20:10:12 | INFO  | Task 10f052ae-6ea4-4ddf-80d9-355d23f64cad is in state STARTED 2025-05-13 20:10:12.495494 | orchestrator | 2025-05-13 20:10:12 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:15.546331 | orchestrator | 2025-05-13 20:10:15 | INFO  | Task c47f64d6-f890-45e7-9052-6bae5131d61b is in state SUCCESS 2025-05-13 20:10:15.546453 | orchestrator | 2025-05-13 20:10:15.548367 | orchestrator | 2025-05-13 20:10:15.548406 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-13 20:10:15.548419 | orchestrator | 2025-05-13 20:10:15.548430 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-13 20:10:15.548441 | orchestrator | Tuesday 13 May 2025 20:06:06 +0000 (0:00:00.104) 0:00:00.105 *********** 2025-05-13 20:10:15.548453 | orchestrator | ok: [localhost] => { 2025-05-13 20:10:15.548465 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-13 20:10:15.548476 | orchestrator | } 2025-05-13 20:10:15.548764 | orchestrator | 2025-05-13 20:10:15.548785 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-13 20:10:15.548796 | orchestrator | Tuesday 13 May 2025 20:06:06 +0000 (0:00:00.038) 0:00:00.143 *********** 2025-05-13 20:10:15.548808 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-13 20:10:15.548821 | orchestrator | ...ignoring 2025-05-13 20:10:15.548832 | orchestrator | 2025-05-13 20:10:15.548843 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-13 20:10:15.548854 | orchestrator | Tuesday 13 May 2025 20:06:09 +0000 (0:00:03.000) 0:00:03.143 *********** 2025-05-13 20:10:15.548865 | orchestrator | skipping: [localhost] 2025-05-13 20:10:15.548876 | orchestrator | 2025-05-13 20:10:15.548887 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-13 20:10:15.548898 | orchestrator | Tuesday 13 May 2025 20:06:09 +0000 (0:00:00.058) 0:00:03.202 *********** 2025-05-13 20:10:15.548909 | orchestrator | ok: [localhost] 2025-05-13 20:10:15.548920 | orchestrator | 2025-05-13 20:10:15.548931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:10:15.548942 | orchestrator | 2025-05-13 20:10:15.548953 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:10:15.548988 | orchestrator | Tuesday 13 May 2025 20:06:09 +0000 (0:00:00.223) 0:00:03.426 *********** 2025-05-13 20:10:15.549000 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:10:15.549011 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:10:15.549022 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:10:15.549032 | orchestrator | 2025-05-13 20:10:15.549043 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:10:15.549053 | orchestrator | Tuesday 13 May 2025 20:06:10 +0000 (0:00:00.329) 0:00:03.756 *********** 2025-05-13 20:10:15.549064 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-13 20:10:15.549075 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-13 20:10:15.549086 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-13 20:10:15.549097 | orchestrator | 2025-05-13 20:10:15.549107 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-13 20:10:15.549118 | orchestrator | 2025-05-13 20:10:15.549129 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-13 20:10:15.549139 | orchestrator | Tuesday 13 May 2025 20:06:10 +0000 (0:00:00.598) 0:00:04.354 *********** 2025-05-13 20:10:15.549150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:10:15.549161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 20:10:15.549199 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 20:10:15.549210 | orchestrator | 2025-05-13 20:10:15.549220 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 20:10:15.549231 | orchestrator | Tuesday 13 May 2025 20:06:11 +0000 (0:00:00.366) 0:00:04.721 *********** 2025-05-13 20:10:15.549256 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:10:15.549268 | orchestrator | 2025-05-13 20:10:15.549279 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-13 20:10:15.549290 | orchestrator | Tuesday 13 May 2025 20:06:11 +0000 (0:00:00.580) 0:00:05.301 *********** 2025-05-13 
20:10:15.549321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549381 | orchestrator | 2025-05-13 20:10:15.549402 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-13 20:10:15.549413 | orchestrator | Tuesday 13 May 2025 20:06:15 +0000 (0:00:03.574) 0:00:08.875 *********** 2025-05-13 20:10:15.549424 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:10:15.549436 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.549447 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.549458 | orchestrator | 2025-05-13 20:10:15.549469 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-13 20:10:15.549487 | orchestrator | Tuesday 13 May 2025 20:06:16 +0000 (0:00:00.947) 0:00:09.823 *********** 2025-05-13 20:10:15.549498 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.549509 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.549520 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:10:15.549530 | orchestrator | 2025-05-13 20:10:15.549541 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-13 20:10:15.549552 | orchestrator | Tuesday 13 May 2025 20:06:17 +0000 (0:00:01.657) 0:00:11.481 *********** 2025-05-13 20:10:15.549570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.549623 | orchestrator | 2025-05-13 20:10:15.549634 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-13 20:10:15.549645 | orchestrator | Tuesday 13 May 2025 20:06:21 +0000 (0:00:04.140) 0:00:15.621 *********** 2025-05-13 20:10:15.549656 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.549667 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.549678 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:10:15.549688 | orchestrator | 2025-05-13 20:10:15.549699 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-13 20:10:15.549709 | orchestrator | Tuesday 13 May 2025 20:06:23 +0000 (0:00:01.138) 0:00:16.760 *********** 2025-05-13 20:10:15.549725 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:10:15.549736 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:10:15.549746 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:10:15.549757 | orchestrator | 2025-05-13 20:10:15.549767 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 20:10:15.549778 | orchestrator | Tuesday 13 May 2025 20:06:27 +0000 (0:00:04.460) 0:00:21.220 *********** 2025-05-13 20:10:15.549789 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:10:15.549799 | orchestrator | 2025-05-13 20:10:15.549810 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-13 20:10:15.549820 | orchestrator | Tuesday 13 May 2025 20:06:28 +0000 (0:00:00.767) 0:00:21.988 *********** 2025-05-13 20:10:15.549840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.549859 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:10:15.549876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.549888 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.549907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.549926 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.549936 | orchestrator | 2025-05-13 20:10:15.549947 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-13 20:10:15.549958 | orchestrator | Tuesday 13 May 2025 20:06:31 +0000 (0:00:02.873) 0:00:24.862 *********** 2025-05-13 20:10:15.549970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.549981 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:10:15.550003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.550067 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.550082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.550094 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.550105 | orchestrator | 2025-05-13 20:10:15.550116 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-13 20:10:15.550126 | orchestrator | Tuesday 13 May 2025 20:06:33 +0000 (0:00:01.963) 0:00:26.825 *********** 2025-05-13 20:10:15.550143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.550188 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:10:15.550211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.550223 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:10:15.550239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-13 20:10:15.550258 | 
orchestrator | skipping: [testbed-node-1] 2025-05-13 20:10:15.550268 | orchestrator | 2025-05-13 20:10:15.550279 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-13 20:10:15.550290 | orchestrator | Tuesday 13 May 2025 20:06:35 +0000 (0:00:02.747) 0:00:29.573 *********** 2025-05-13 20:10:15.550407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.550429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.550458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-13 20:10:15.550471 | orchestrator | 2025-05-13 20:10:15.550483 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-13 20:10:15.550494 | orchestrator | Tuesday 13 May 2025 20:06:39 +0000 (0:00:03.867) 0:00:33.440 *********** 2025-05-13 20:10:15.550505 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:10:15.550516 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:10:15.550527 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:10:15.550538 | orchestrator | 2025-05-13 20:10:15.550549 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-13 20:10:15.550561 | orchestrator | Tuesday 13 May 2025 20:06:40 +0000 (0:00:01.142) 0:00:34.582 *********** 2025-05-13 20:10:15.550572 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:10:15.550584 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:10:15.550595 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:10:15.550606 | orchestrator | 2025-05-13 20:10:15.550617 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-13 20:10:15.550628 | orchestrator | Tuesday 13 May 2025 20:06:41 +0000 
(0:00:00.481) 0:00:35.064 ***********
2025-05-13 20:10:15.550639 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.550651 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.550662 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.550673 | orchestrator |
2025-05-13 20:10:15.550684 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-13 20:10:15.550695 | orchestrator | Tuesday 13 May 2025 20:06:41 +0000 (0:00:00.397) 0:00:35.461 ***********
2025-05-13 20:10:15.550707 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-13 20:10:15.550726 | orchestrator | ...ignoring
2025-05-13 20:10:15.550738 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-13 20:10:15.550749 | orchestrator | ...ignoring
2025-05-13 20:10:15.550766 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-13 20:10:15.550777 | orchestrator | ...ignoring
2025-05-13 20:10:15.550789 | orchestrator |
2025-05-13 20:10:15.550800 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-13 20:10:15.550811 | orchestrator | Tuesday 13 May 2025 20:06:52 +0000 (0:00:10.945) 0:00:46.407 ***********
2025-05-13 20:10:15.550822 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.550840 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.550858 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.550876 | orchestrator |
2025-05-13 20:10:15.550894 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-13 20:10:15.550912 | orchestrator | Tuesday 13 May 2025 20:06:53 +0000 (0:00:00.649) 0:00:47.057 ***********
2025-05-13 20:10:15.550931 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.550949 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.550967 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551103 | orchestrator |
2025-05-13 20:10:15.551118 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-13 20:10:15.551128 | orchestrator | Tuesday 13 May 2025 20:06:53 +0000 (0:00:00.401) 0:00:47.458 ***********
2025-05-13 20:10:15.551139 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551150 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551160 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551237 | orchestrator |
2025-05-13 20:10:15.551248 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-05-13 20:10:15.551259 | orchestrator | Tuesday 13 May 2025 20:06:54 +0000 (0:00:00.436) 0:00:47.894 ***********
2025-05-13 20:10:15.551269 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551280 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551289 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551299 | orchestrator |
2025-05-13 20:10:15.551308 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-05-13 20:10:15.551318 | orchestrator | Tuesday 13 May 2025 20:06:54 +0000 (0:00:00.431) 0:00:48.326 ***********
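The three fatal timeouts above are expected on a first deployment: nothing is listening on port 3306 yet, so the failures are ignored and only used to sort hosts into bootstrap candidates. The error text is the failure message of Ansible's wait_for module with a search_regex, so the probe behaves roughly like the sketch below (the variable and register names are assumptions, not taken from this log):

  - name: Check MariaDB service port liveness   # sketch of the probe semantics
    ansible.builtin.wait_for:
      host: "{{ api_interface_address }}"       # e.g. 192.168.16.10 on testbed-node-0
      port: 3306
      search_regex: "MariaDB"                   # the server greeting contains the version string
      timeout: 10
    register: check_mariadb_port_liveness       # hypothetical name
    ignore_errors: true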
2025-05-13 20:10:15.551327 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.551337 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.551346 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.551356 | orchestrator |
2025-05-13 20:10:15.551365 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-05-13 20:10:15.551375 | orchestrator | Tuesday 13 May 2025 20:06:55 +0000 (0:00:00.653) 0:00:48.980 ***********
2025-05-13 20:10:15.551394 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551404 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551413 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551422 | orchestrator |
2025-05-13 20:10:15.551432 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-13 20:10:15.551442 | orchestrator | Tuesday 13 May 2025 20:06:55 +0000 (0:00:00.408) 0:00:49.388 ***********
2025-05-13 20:10:15.551451 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551461 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551471 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-05-13 20:10:15.551480 | orchestrator |
2025-05-13 20:10:15.551490 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-05-13 20:10:15.551499 | orchestrator | Tuesday 13 May 2025 20:06:56 +0000 (0:00:00.368) 0:00:49.757 ***********
2025-05-13 20:10:15.551519 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.551528 | orchestrator |
2025-05-13 20:10:15.551537 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-05-13 20:10:15.551547 | orchestrator | Tuesday 13 May 2025 20:07:16 +0000 (0:00:20.134) 0:01:09.891 ***********
2025-05-13 20:10:15.551557 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.551566 | orchestrator |
2025-05-13 20:10:15.551576 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-13 20:10:15.551585 | orchestrator | Tuesday 13 May 2025 20:07:16 +0000 (0:00:00.116) 0:01:10.007 ***********
2025-05-13 20:10:15.551595 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551604 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551614 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551623 | orchestrator |
2025-05-13 20:10:15.551633 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-05-13 20:10:15.551642 | orchestrator | Tuesday 13 May 2025 20:07:17 +0000 (0:00:01.009) 0:01:11.017 ***********
2025-05-13 20:10:15.551652 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.551661 | orchestrator |
2025-05-13 20:10:15.551671 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-05-13 20:10:15.551680 | orchestrator | Tuesday 13 May 2025 20:07:25 +0000 (0:00:07.778) 0:01:18.795 ***********
2025-05-13 20:10:15.551690 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.551699 | orchestrator |
2025-05-13 20:10:15.551709 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-05-13 20:10:15.551719 | orchestrator | Tuesday 13 May 2025 20:07:35 +0000 (0:00:10.615) 0:01:29.411 ***********
2025-05-13 20:10:15.551728 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.551738 | orchestrator |
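With no live cluster found, testbed-node-0 bootstraps Galera: the bootstrap container prepares the datadir (the 20 s task above), the first MariaDB container is started, and the handlers block until the node reports itself synced. A sync wait of this kind typically polls wsrep_local_state_comment; a minimal sketch, reusing the monitor credentials from the container environment shown earlier and a hypothetical mariadb_monitor_password variable (not the exact kolla-ansible task):

  - name: Wait for MariaDB to report WSREP state Synced   # sketch
    ansible.builtin.command: >-
      docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
      --silent --skip-column-names
      -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_status
    until: wsrep_status.stdout is search('Synced')
    retries: 10
    delay: 6
    changed_when: false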
2025-05-13 20:10:15.551747 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-05-13 20:10:15.551757 | orchestrator | Tuesday 13 May 2025 20:07:38 +0000 (0:00:02.542) 0:01:31.953 ***********
2025-05-13 20:10:15.551766 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.551776 | orchestrator |
2025-05-13 20:10:15.551785 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-05-13 20:10:15.551795 | orchestrator | Tuesday 13 May 2025 20:07:38 +0000 (0:00:00.132) 0:01:32.086 ***********
2025-05-13 20:10:15.551804 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551814 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.551823 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.551833 | orchestrator |
2025-05-13 20:10:15.551842 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-05-13 20:10:15.551852 | orchestrator | Tuesday 13 May 2025 20:07:38 +0000 (0:00:00.520) 0:01:32.607 ***********
2025-05-13 20:10:15.551861 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.551871 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-05-13 20:10:15.551887 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:10:15.551897 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:10:15.551906 | orchestrator |
2025-05-13 20:10:15.551915 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-13 20:10:15.551925 | orchestrator | skipping: no hosts matched
2025-05-13 20:10:15.551934 | orchestrator |
2025-05-13 20:10:15.551944 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-13 20:10:15.551953 | orchestrator |
2025-05-13 20:10:15.551963 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-13 20:10:15.551972 | orchestrator | Tuesday 13 May 2025 20:07:39 +0000 (0:00:00.344) 0:01:32.951 ***********
2025-05-13 20:10:15.551982 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:10:15.551992 | orchestrator |
2025-05-13 20:10:15.552001 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-13 20:10:15.552011 | orchestrator | Tuesday 13 May 2025 20:07:58 +0000 (0:00:18.909) 0:01:51.861 ***********
2025-05-13 20:10:15.552020 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.552036 | orchestrator |
2025-05-13 20:10:15.552046 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-13 20:10:15.552055 | orchestrator | Tuesday 13 May 2025 20:08:32 +0000 (0:00:34.615) 0:02:26.476 ***********
2025-05-13 20:10:15.552065 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.552074 | orchestrator |
2025-05-13 20:10:15.552083 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-13 20:10:15.552093 | orchestrator |
2025-05-13 20:10:15.552103 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-13 20:10:15.552112 | orchestrator | Tuesday 13 May 2025 20:08:35 +0000 (0:00:02.397) 0:02:28.874 ***********
2025-05-13 20:10:15.552122 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:10:15.552131 | orchestrator |
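testbed-node-1 is the first joiner, and its WSREP sync wait takes about 35 seconds: a fresh joiner receives a full state snapshot transfer (SST) from the donor before it can report Synced, so long first waits (and the liveness retries visible below) are normal. While running, every member is health-checked with the clustercheck script from the service definition earlier in this log; re-rendered as YAML, that healthcheck block is:

  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "/usr/bin/clustercheck"]
    timeout: "30"

clustercheck logs in with the MYSQL_USERNAME=monitor credentials from the same environment block and reports a member healthy only while it is synced; AVAILABLE_WHEN_DONOR=1 keeps a node serving even while it acts as SST donor.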
2025-05-13 20:10:15.552141 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-13 20:10:15.552150 | orchestrator | Tuesday 13 May 2025 20:09:00 +0000 (0:00:25.108) 0:02:53.983 ***********
2025-05-13 20:10:15.552160 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2025-05-13 20:10:15.552185 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.552195 | orchestrator |
2025-05-13 20:10:15.552204 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-13 20:10:15.552213 | orchestrator | Tuesday 13 May 2025 20:09:30 +0000 (0:00:30.252) 0:03:24.236 ***********
2025-05-13 20:10:15.552228 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.552238 | orchestrator |
2025-05-13 20:10:15.552248 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-13 20:10:15.552257 | orchestrator |
2025-05-13 20:10:15.552267 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-13 20:10:15.552276 | orchestrator | Tuesday 13 May 2025 20:09:33 +0000 (0:00:02.704) 0:03:26.941 ***********
2025-05-13 20:10:15.552285 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.552295 | orchestrator |
2025-05-13 20:10:15.552304 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-13 20:10:15.552314 | orchestrator | Tuesday 13 May 2025 20:09:45 +0000 (0:00:12.195) 0:03:39.136 ***********
2025-05-13 20:10:15.552323 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left).
2025-05-13 20:10:15.552333 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.552343 | orchestrator |
2025-05-13 20:10:15.552352 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-13 20:10:15.552361 | orchestrator | Tuesday 13 May 2025 20:09:59 +0000 (0:00:14.289) 0:03:53.425 ***********
2025-05-13 20:10:15.552371 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.552380 | orchestrator |
2025-05-13 20:10:15.552392 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-13 20:10:15.552407 | orchestrator |
2025-05-13 20:10:15.552423 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-13 20:10:15.552435 | orchestrator | Tuesday 13 May 2025 20:10:02 +0000 (0:00:02.509) 0:03:55.935 ***********
2025-05-13 20:10:15.552445 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:10:15.552454 | orchestrator |
2025-05-13 20:10:15.552464 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-05-13 20:10:15.552473 | orchestrator | Tuesday 13 May 2025 20:10:02 +0000 (0:00:00.523) 0:03:56.458 ***********
2025-05-13 20:10:15.552483 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.552492 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.552502 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.552511 | orchestrator |
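The post-deploy tasks run their SQL only on the shard master (testbed-node-0), which is why the other members are skipped: with Galera, the user and grant statements replicate to the whole cluster anyway. A sketch of such a task with community.mysql (the privilege string and variable name are illustrative, not taken from this log):

  - name: Creating mysql monitor user             # sketch
    community.mysql.mysql_user:
      name: monitor
      host: "%"
      password: "{{ mariadb_monitor_password }}"  # hypothetical variable name
      priv: "*.*:REPLICATION CLIENT"              # illustrative privilege only
      login_host: "192.168.16.10"                 # the shard master in this run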
2025-05-13 20:10:15.552521 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-05-13 20:10:15.552530 | orchestrator | Tuesday 13 May 2025 20:10:05 +0000 (0:00:02.641) 0:03:59.100 ***********
2025-05-13 20:10:15.552540 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.552549 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.552559 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.552583 | orchestrator |
2025-05-13 20:10:15.552604 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-05-13 20:10:15.552628 | orchestrator | Tuesday 13 May 2025 20:10:07 +0000 (0:00:02.114) 0:04:01.214 ***********
2025-05-13 20:10:15.552643 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.552658 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.552674 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.552689 | orchestrator |
2025-05-13 20:10:15.552704 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-05-13 20:10:15.552720 | orchestrator | Tuesday 13 May 2025 20:10:09 +0000 (0:00:02.130) 0:04:03.345 ***********
2025-05-13 20:10:15.552735 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.552752 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.552768 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:10:15.552783 | orchestrator |
2025-05-13 20:10:15.552799 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-05-13 20:10:15.552809 | orchestrator | Tuesday 13 May 2025 20:10:11 +0000 (0:00:02.022) 0:04:05.367 ***********
2025-05-13 20:10:15.552818 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:10:15.552827 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:10:15.552848 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:10:15.552876 | orchestrator |
2025-05-13 20:10:15.552893 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-13 20:10:15.552909 | orchestrator | Tuesday 13 May 2025 20:10:14 +0000 (0:00:03.223) 0:04:08.591 ***********
2025-05-13 20:10:15.552924 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:10:15.552940 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:10:15.552956 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:10:15.552971 | orchestrator |
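The final gate checks MariaDB through the load balancer rather than per node, which also exercises the HAProxy frontend configured earlier: only testbed-node-0 is an active backend, while testbed-node-1 and testbed-node-2 carry the backup flag, so all client traffic lands on a single Galera member. A sketch of the VIP wait (kolla_internal_vip_address is the usual kolla-ansible variable name; the VIP itself is not shown in this log):

  - name: Wait for MariaDB service to be ready through VIP   # sketch
    ansible.builtin.wait_for:
      host: "{{ kolla_internal_vip_address }}"
      port: 3306
      search_regex: "MariaDB"
      timeout: 60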
2025-05-13 20:10:15.552985 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:10:15.553001 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-13 20:10:15.553017 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-05-13 20:10:15.553034 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-05-13 20:10:15.553049 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-05-13 20:10:15.553066 | orchestrator |
2025-05-13 20:10:15.553080 | orchestrator |
2025-05-13 20:10:15.553090 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:10:15.553100 | orchestrator | Tuesday 13 May 2025 20:10:15 +0000 (0:00:00.217) 0:04:08.809 ***********
2025-05-13 20:10:15.553110 | orchestrator | ===============================================================================
2025-05-13 20:10:15.553119 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 64.87s
2025-05-13 20:10:15.553128 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.02s
2025-05-13 20:10:15.553138 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 20.13s
2025-05-13 20:10:15.553147 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 14.29s
2025-05-13 20:10:15.553266 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.20s
2025-05-13 20:10:15.553283 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s
2025-05-13 20:10:15.553299 | orchestrator | mariadb : Wait for first MariaDB service port liveness ----------------- 10.62s
2025-05-13 20:10:15.553315 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.78s
2025-05-13 20:10:15.553331 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.10s
2025-05-13 20:10:15.553361 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.46s
2025-05-13 20:10:15.553377 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.14s
2025-05-13 20:10:15.553390 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.87s
2025-05-13 20:10:15.553406 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.57s
2025-05-13 20:10:15.553421 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.22s
2025-05-13 20:10:15.553435 | orchestrator | Check MariaDB service --------------------------------------------------- 3.00s
2025-05-13 20:10:15.553449 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.87s
2025-05-13 20:10:15.553464 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.75s
2025-05-13 20:10:15.553480 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.64s
2025-05-13 20:10:15.553496 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s
2025-05-13 20:10:15.553512 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.51s
2025-05-13 20:10:15.553528 | orchestrator | 2025-05-13 20:10:15 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state STARTED
2025-05-13 20:10:15.553543 | orchestrator | 2025-05-13 20:10:15 | INFO  | Task 10f052ae-6ea4-4ddf-80d9-355d23f64cad is in state STARTED
2025-05-13 20:10:15.553553 | orchestrator | 2025-05-13 20:10:15 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:10:18.599531 | orchestrator | 2025-05-13 20:10:18 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:10:18.599783 | orchestrator | 2025-05-13 20:10:18 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED
2025-05-13 20:10:18.602302 | orchestrator | 2025-05-13 20:10:18 | INFO  | Task 53af6222-0926-4ae8-aa70-cdfb706ec256 is in state SUCCESS
2025-05-13 20:10:18.603697 | orchestrator |
2025-05-13 20:10:18.603748 | orchestrator |
2025-05-13 20:10:18.603770 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-13 20:10:18.603791 | orchestrator |
2025-05-13 20:10:18.604321 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-13 20:10:18.604355 | orchestrator | Tuesday 13 May 2025 20:08:11 +0000 (0:00:00.733) 0:00:00.733 ***********
2025-05-13 20:10:18.604375 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
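Note that the timestamps inside this play jump back to 20:08: the MariaDB and Ceph playbooks were launched as separate OSISM tasks (the "is in state STARTED" lines above) and apparently ran concurrently, with each task's output flushed to the job console as it completes. The ceph-facts role first works out which container runtime to use; the podman check plus the container_binary fact amount to roughly the following sketch (not the exact ceph-ansible code):

  - name: Check if podman binary is present     # sketch
    ansible.builtin.stat:
      path: /usr/bin/podman
    register: podman_binary

  - name: Set_fact container_binary             # sketch
    ansible.builtin.set_fact:
      container_binary: "{{ 'podman' if podman_binary.stat.exists else 'docker' }}"

On these nodes docker is selected, which is why the mon-container lookups further down shell out to docker ps.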
2025-05-13 20:10:18.604396 | orchestrator |
2025-05-13 20:10:18.604414 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-13 20:10:18.604454 | orchestrator | Tuesday 13 May 2025 20:08:11 +0000 (0:00:00.650) 0:00:01.383 ***********
2025-05-13 20:10:18.604475 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.604495 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.604514 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.604931 | orchestrator |
2025-05-13 20:10:18.604953 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-13 20:10:18.604964 | orchestrator | Tuesday 13 May 2025 20:08:12 +0000 (0:00:00.617) 0:00:02.001 ***********
2025-05-13 20:10:18.604975 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.604986 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.604996 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.605007 | orchestrator |
2025-05-13 20:10:18.605018 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-13 20:10:18.605028 | orchestrator | Tuesday 13 May 2025 20:08:12 +0000 (0:00:00.280) 0:00:02.282 ***********
2025-05-13 20:10:18.605039 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.605049 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.605060 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.605071 | orchestrator |
2025-05-13 20:10:18.605081 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-13 20:10:18.605117 | orchestrator | Tuesday 13 May 2025 20:08:13 +0000 (0:00:00.738) 0:00:03.021 ***********
2025-05-13 20:10:18.605128 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.605139 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.605150 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.605161 | orchestrator |
2025-05-13 20:10:18.605241 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-13 20:10:18.605252 | orchestrator | Tuesday 13 May 2025 20:08:13 +0000 (0:00:00.281) 0:00:03.303 ***********
2025-05-13 20:10:18.605263 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.605274 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.605284 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.605294 | orchestrator |
2025-05-13 20:10:18.605305 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-13 20:10:18.605316 | orchestrator | Tuesday 13 May 2025 20:08:14 +0000 (0:00:00.268) 0:00:03.571 ***********
2025-05-13 20:10:18.605326 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.605337 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.605347 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.605358 | orchestrator |
2025-05-13 20:10:18.605368 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-13 20:10:18.605379 | orchestrator | Tuesday 13 May 2025 20:08:14 +0000 (0:00:00.301) 0:00:03.873 ***********
2025-05-13 20:10:18.605389 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.605402 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.605412 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.605423 | orchestrator |
2025-05-13 20:10:18.605434 | orchestrator | TASK [ceph-facts : Set_fact
ceph_release ceph_stable_release] ****************** 2025-05-13 20:10:18.605444 | orchestrator | Tuesday 13 May 2025 20:08:14 +0000 (0:00:00.480) 0:00:04.353 *********** 2025-05-13 20:10:18.605454 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.605465 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.605475 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.605486 | orchestrator | 2025-05-13 20:10:18.605496 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-13 20:10:18.605507 | orchestrator | Tuesday 13 May 2025 20:08:15 +0000 (0:00:00.280) 0:00:04.634 *********** 2025-05-13 20:10:18.605517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 20:10:18.605528 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:10:18.605538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:10:18.605549 | orchestrator | 2025-05-13 20:10:18.605560 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-13 20:10:18.605571 | orchestrator | Tuesday 13 May 2025 20:08:15 +0000 (0:00:00.671) 0:00:05.306 *********** 2025-05-13 20:10:18.605583 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.605595 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.605606 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.605618 | orchestrator | 2025-05-13 20:10:18.605631 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-13 20:10:18.605643 | orchestrator | Tuesday 13 May 2025 20:08:16 +0000 (0:00:00.416) 0:00:05.722 *********** 2025-05-13 20:10:18.605655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 20:10:18.605667 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:10:18.605678 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:10:18.605690 | orchestrator | 2025-05-13 20:10:18.605702 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-13 20:10:18.605714 | orchestrator | Tuesday 13 May 2025 20:08:18 +0000 (0:00:02.025) 0:00:07.747 *********** 2025-05-13 20:10:18.605725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 20:10:18.605738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 20:10:18.605759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 20:10:18.605771 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.605783 | orchestrator | 2025-05-13 20:10:18.605795 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-13 20:10:18.605852 | orchestrator | Tuesday 13 May 2025 20:08:18 +0000 (0:00:00.412) 0:00:08.160 *********** 2025-05-13 20:10:18.605868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.605892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
2025-05-13 20:10:18.605702 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-13 20:10:18.605714 | orchestrator | Tuesday 13 May 2025 20:08:18 +0000 (0:00:02.025) 0:00:07.747 ***********
2025-05-13 20:10:18.605725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-13 20:10:18.605738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-13 20:10:18.605759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-13 20:10:18.605771 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.605783 | orchestrator |
2025-05-13 20:10:18.605795 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-13 20:10:18.605852 | orchestrator | Tuesday 13 May 2025 20:08:18 +0000 (0:00:00.412) 0:00:08.160 ***********
2025-05-13 20:10:18.605868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605914 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.605925 | orchestrator |
2025-05-13 20:10:18.605936 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-13 20:10:18.605947 | orchestrator | Tuesday 13 May 2025 20:08:19 +0000 (0:00:00.826) 0:00:08.987 ***********
2025-05-13 20:10:18.605960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.605998 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606009 | orchestrator |
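The socket checks and the non_container fact above were all skipped because 'not containerized_deployment | bool' is false in this containerized testbed; the containerized path below instead picks the running mon from the registered docker ps results. A plausible sketch of that selection, reusing the hypothetical register name from the lookup sketch earlier; it keeps the last loop item whose command produced output:

- name: Set_fact running_mon - container (sketch)
  ansible.builtin.set_fact:
    running_mon: "{{ item.item }}"
  with_items: "{{ ceph_mon_container_stat.results }}"  # hypothetical register name
  when: item.stdout_lines | default([]) | length > 0

With all three monitors reporting a container ID below, the last match (testbed-node-2) would win, which is consistent with the fsid query later delegating to testbed-node-2(192.168.16.12).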
2025-05-13 20:10:18.606103 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-13 20:10:18.606116 | orchestrator | Tuesday 13 May 2025 20:08:19 +0000 (0:00:00.163) 0:00:09.150 ***********
2025-05-13 20:10:18.606130 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5d274135c3ee', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-13 20:08:16.833344', 'end': '2025-05-13 20:08:16.884428', 'delta': '0:00:00.051084', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5d274135c3ee'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.606145 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b11b6c582e68', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-13 20:08:17.543166', 'end': '2025-05-13 20:08:17.582646', 'delta': '0:00:00.039480', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b11b6c582e68'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.606234 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4982db07972', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-13 20:08:18.064788', 'end': '2025-05-13 20:08:18.106229', 'delta': '0:00:00.041441', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4982db07972'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.606250 | orchestrator |
2025-05-13 20:10:18.606261 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-13 20:10:18.606272 | orchestrator | Tuesday 13 May 2025 20:08:20 +0000 (0:00:00.369) 0:00:09.520 ***********
2025-05-13 20:10:18.606282 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.606293 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:10:18.606304 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:10:18.606314 | orchestrator |
2025-05-13 20:10:18.606325 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-13 20:10:18.606335 | orchestrator | Tuesday 13 May 2025 20:08:20 +0000 (0:00:00.436) 0:00:09.957 ***********
2025-05-13 20:10:18.606346 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-05-13 20:10:18.606356 | orchestrator |
2025-05-13 20:10:18.606366 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-13 20:10:18.606377 | orchestrator | Tuesday 13 May 2025 20:08:22 +0000 (0:00:01.678) 0:00:11.635 ***********
2025-05-13 20:10:18.606387 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606398 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606408 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606419 | orchestrator |
2025-05-13 20:10:18.606429 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-13 20:10:18.606440 | orchestrator | Tuesday 13 May 2025 20:08:22 +0000 (0:00:00.289) 0:00:11.925 ***********
2025-05-13 20:10:18.606450 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606461 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606471 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606481 | orchestrator |
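The fsid probe above ran once against the delegated monitor and succeeded, so the rc-1 fallback tasks were skipped; the "Set_fact fsid" chain that follows reuses the value reported by the running cluster rather than generating a new one. A compact sketch of this retrieve-or-generate logic, assuming the container_exec_cmd fact set earlier, the standard 'ceph fsid' subcommand, and hypothetical task shapes:

- name: Get current fsid if cluster is already running (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster {{ cluster | default('ceph') }} fsid"
  register: current_fsid
  run_once: true
  delegate_to: "{{ running_mon }}"
  changed_when: false
  failed_when: false

- name: Set_fact fsid from current_fsid (sketch)
  ansible.builtin.set_fact:
    fsid: "{{ current_fsid.stdout }}"
  when: current_fsid.rc == 0

- name: Generate cluster fsid (sketch, first bootstrap only)
  ansible.builtin.command: python3 -c 'import uuid; print(uuid.uuid4())'
  register: cluster_uuid
  run_once: true
  when: current_fsid.rc != 0

In this run the query returned rc 0, which is why "Set_fact fsid from current_fsid" reports ok and "Generate cluster fsid" reports skipping below.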
2025-05-13 20:10:18.606492 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-13 20:10:18.606502 | orchestrator | Tuesday 13 May 2025 20:08:22 +0000 (0:00:00.386) 0:00:12.312 ***********
2025-05-13 20:10:18.606513 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606523 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606533 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606544 | orchestrator |
2025-05-13 20:10:18.606554 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-13 20:10:18.606565 | orchestrator | Tuesday 13 May 2025 20:08:23 +0000 (0:00:00.450) 0:00:12.763 ***********
2025-05-13 20:10:18.606576 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:10:18.606586 | orchestrator |
2025-05-13 20:10:18.606597 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-13 20:10:18.606615 | orchestrator | Tuesday 13 May 2025 20:08:23 +0000 (0:00:00.134) 0:00:12.898 ***********
2025-05-13 20:10:18.606626 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606636 | orchestrator |
2025-05-13 20:10:18.606647 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-13 20:10:18.606657 | orchestrator | Tuesday 13 May 2025 20:08:23 +0000 (0:00:00.238) 0:00:13.136 ***********
2025-05-13 20:10:18.606667 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606678 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606688 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606699 | orchestrator |
2025-05-13 20:10:18.606709 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-13 20:10:18.606719 | orchestrator | Tuesday 13 May 2025 20:08:23 +0000 (0:00:00.285) 0:00:13.422 ***********
2025-05-13 20:10:18.606730 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606740 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606751 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606761 | orchestrator |
2025-05-13 20:10:18.606772 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-13 20:10:18.606782 | orchestrator | Tuesday 13 May 2025 20:08:24 +0000 (0:00:00.302) 0:00:13.724 ***********
2025-05-13 20:10:18.606793 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606803 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606814 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606824 | orchestrator |
2025-05-13 20:10:18.606835 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-13 20:10:18.606845 | orchestrator | Tuesday 13 May 2025 20:08:24 +0000 (0:00:00.480) 0:00:14.205 ***********
2025-05-13 20:10:18.606855 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606866 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606876 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606886 | orchestrator |
2025-05-13 20:10:18.606897 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-13 20:10:18.606908 | orchestrator | Tuesday 13 May 2025 20:08:24 +0000 (0:00:00.299) 0:00:14.505 ***********
2025-05-13 20:10:18.606918 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:10:18.606928 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:10:18.606939 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.606950 | orchestrator |
2025-05-13 20:10:18.606960 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-13 20:10:18.606971 | orchestrator | Tuesday 13 May 2025 20:08:25 +0000 (0:00:00.303)
0:00:14.808 *********** 2025-05-13 20:10:18.606981 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.606991 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.607002 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.607013 | orchestrator | 2025-05-13 20:10:18.607023 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-13 20:10:18.607063 | orchestrator | Tuesday 13 May 2025 20:08:25 +0000 (0:00:00.313) 0:00:15.122 *********** 2025-05-13 20:10:18.607076 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.607087 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.607097 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.607108 | orchestrator | 2025-05-13 20:10:18.607119 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-13 20:10:18.607189 | orchestrator | Tuesday 13 May 2025 20:08:26 +0000 (0:00:00.473) 0:00:15.596 *********** 2025-05-13 20:10:18.607220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e', 'dm-uuid-LVM-rUzZXZKL8QvWDWEmhCrsMJVItcd4niAXg5NokKKGy3QkHSq9S0nIJSi2Q21T5NwR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2', 'dm-uuid-LVM-vBJkM2Ms9xoHjlu9Xm9OMIS3PvG9U5373VzXqktSVgwnrRKE1dB0oZToZyk5ZKn3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6', 'dm-uuid-LVM-HIYs3chgx9w0QZEoLwAI7WWwTHGM5AD06WmuLuFfZnhJzmBJxQa9IZ2hR7qsn9Rt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc', 'dm-uuid-LVM-uQFKQmydpWQsLnUFa0O91r217huYWLBPpRKPNOkZYm2ddggQo0qiQ3GpdWmYmqcX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eCPfZX-2Obe-Qkxq-eA0e-0CxC-TVhB-BfSZ3B', 'scsi-0QEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd', 'scsi-SQEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-alfQ1Y-Kvv2-D8lJ-HNk0-0GmX-PLlh-wukyi0', 'scsi-0QEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161', 'scsi-SQEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4', 'scsi-SQEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KHL89F-O2YZ-U9aB-y3jM-YLBU-PA1u-5P96ej', 'scsi-0QEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043', 'scsi-SQEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdUup2-oP2G-uJlD-mDPP-VpAJ-Acbk-ji6VVF', 'scsi-0QEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36', 'scsi-SQEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb', 'scsi-SQEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607758 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.607770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607781 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.607792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c', 'dm-uuid-LVM-FrPe5ukHniNrH6lviJmTua1GloekeVZWHXqf71qIYfWnlrHYWae7nvsOEo1vSfYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898', 'dm-uuid-LVM-2b5UfzfqWpwbtNFwxAJo3rUUbWYFIFueNhzhEmmBgAXFJCQFegzZGzI75pKCFbW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-13 20:10:18.607945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i0C0RG-wQcy-1Jbz-VMJa-5NhQ-4ZiG-BdNyaC', 'scsi-0QEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711', 'scsi-SQEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-13 20:10:18.607978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sYEEX0-cwtG-HcvZ-EkWI-2rqr-mPns-GCjGTv', 'scsi-0QEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d', 'scsi-SQEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-13 20:10:18.607990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61', 'scsi-SQEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-13 20:10:18.608013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-13 20:10:18.608025 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:10:18.608036 | orchestrator |
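Every ansible_devices entry above (device-mapper volumes, loop devices, whole disks, and the config-drive sr0) was skipped with false_condition 'osd_auto_discovery | default(False) | bool': this testbed configures its OSD devices explicitly, so no device list is derived from facts. If auto-discovery were enabled, the task below would build `devices` from the collected facts; a rough sketch of such a filter, not ceph-ansible's exact expression, keeping only whole disks with no partitions and no holders:

- name: Set_fact devices generate device list when osd_auto_discovery (sketch)
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_facts['devices'] }}"
  when:
    - osd_auto_discovery | default(False) | bool
    - item.value.partitions | length == 0
    - item.value.holders | length == 0
    - item.key is not match('^(loop|dm-|sr)')  # skip loopback, device-mapper and optical entries

On these nodes such a filter would leave only the unused /dev/sdd, since sda carries the root partitions and sdb/sdc already hold the ceph LVM volumes listed in their holders fields.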
2025-05-13 20:10:18.608047 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-05-13 20:10:18.608058 | orchestrator | Tuesday 13 May 2025 20:08:26 +0000 (0:00:00.571) 0:00:16.167 ***********
2025-05-13 20:10:18.608074 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e', 'dm-uuid-LVM-rUzZXZKL8QvWDWEmhCrsMJVItcd4niAXg5NokKKGy3QkHSq9S0nIJSi2Q21T5NwR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2', 'dm-uuid-LVM-vBJkM2Ms9xoHjlu9Xm9OMIS3PvG9U5373VzXqktSVgwnrRKE1dB0oZToZyk5ZKn3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-13 20:10:18.608223 | orchestrator | skipping: [testbed-node-3] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6', 'dm-uuid-LVM-HIYs3chgx9w0QZEoLwAI7WWwTHGM5AD06WmuLuFfZnhJzmBJxQa9IZ2hR7qsn9Rt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_549d2c5e-fb0a-4dd2-8ec5-7d721ec5bb2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc', 'dm-uuid-LVM-uQFKQmydpWQsLnUFa0O91r217huYWLBPpRKPNOkZYm2ddggQo0qiQ3GpdWmYmqcX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--eb14b8c1--d757--5b78--a398--3e433d34ee3e-osd--block--eb14b8c1--d757--5b78--a398--3e433d34ee3e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eCPfZX-2Obe-Qkxq-eA0e-0CxC-TVhB-BfSZ3B', 'scsi-0QEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd', 'scsi-SQEMU_QEMU_HARDDISK_34a01356-b2ad-4692-b4fa-0e371ae7ecbd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--55d6de5b--857a--5090--90bd--6b26b006e6c2-osd--block--55d6de5b--857a--5090--90bd--6b26b006e6c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-alfQ1Y-Kvv2-D8lJ-HNk0-0GmX-PLlh-wukyi0', 'scsi-0QEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161', 'scsi-SQEMU_QEMU_HARDDISK_ca00bcd5-8e8a-4b90-8497-af6d74b86161'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4', 'scsi-SQEMU_QEMU_HARDDISK_04d2f464-e449-42d7-9ceb-0224b6b42ef4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608434 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.608451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c', 'dm-uuid-LVM-FrPe5ukHniNrH6lviJmTua1GloekeVZWHXqf71qIYfWnlrHYWae7nvsOEo1vSfYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608501 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608520 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898', 'dm-uuid-LVM-2b5UfzfqWpwbtNFwxAJo3rUUbWYFIFueNhzhEmmBgAXFJCQFegzZGzI75pKCFbW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16', 'scsi-SQEMU_QEMU_HARDDISK_41c94169-cd66-4abb-b62b-5ec1ccb982a2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608587 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c7ef241c--3ce4--53e3--9962--a0236c38cab6-osd--block--c7ef241c--3ce4--53e3--9962--a0236c38cab6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KHL89F-O2YZ-U9aB-y3jM-YLBU-PA1u-5P96ej', 'scsi-0QEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043', 'scsi-SQEMU_QEMU_HARDDISK_e87b71fc-701a-46cb-bbd9-3f15f37c3043'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--53409cd5--715f--5221--bc58--8adc9fe4a6bc-osd--block--53409cd5--715f--5221--bc58--8adc9fe4a6bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZdUup2-oP2G-uJlD-mDPP-VpAJ-Acbk-ji6VVF', 'scsi-0QEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36', 'scsi-SQEMU_QEMU_HARDDISK_97094a75-4993-40db-897e-adadcd017b36'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608646 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb', 'scsi-SQEMU_QEMU_HARDDISK_9d4a667e-1daa-4ea2-845b-5122e74908eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608664 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608710 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.608721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d5abef6-0ff0-4989-a4ff-307849d725af-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9e27190a--cad1--5451--a880--ae60fcff608c-osd--block--9e27190a--cad1--5451--a880--ae60fcff608c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i0C0RG-wQcy-1Jbz-VMJa-5NhQ-4ZiG-BdNyaC', 'scsi-0QEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711', 'scsi-SQEMU_QEMU_HARDDISK_0bd34d58-f920-45be-9e9c-4745e29ec711'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6f4317e9--8e5a--55d6--81df--460521249898-osd--block--6f4317e9--8e5a--55d6--81df--460521249898'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sYEEX0-cwtG-HcvZ-EkWI-2rqr-mPns-GCjGTv', 'scsi-0QEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d', 'scsi-SQEMU_QEMU_HARDDISK_5a89f530-918e-4949-9347-1038fd288b0d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61', 'scsi-SQEMU_QEMU_HARDDISK_10c33077-7b2d-46df-acf0-04e3d7859f61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-13-19-06-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-13 20:10:18.608850 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.608860 | orchestrator | 2025-05-13 20:10:18.608871 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-13 20:10:18.608883 | orchestrator | Tuesday 13 May 2025 20:08:27 +0000 (0:00:00.542) 0:00:16.710 *********** 2025-05-13 20:10:18.608899 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.608910 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.608921 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.608931 | orchestrator | 2025-05-13 20:10:18.608942 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-05-13 20:10:18.608952 | orchestrator | Tuesday 13 May 2025 20:08:27 +0000 (0:00:00.660) 0:00:17.371 *********** 2025-05-13 20:10:18.608963 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.608974 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.608984 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.608995 | orchestrator | 2025-05-13 20:10:18.609005 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-13 20:10:18.609016 | orchestrator | Tuesday 13 May 2025 20:08:28 +0000 (0:00:00.459) 0:00:17.830 *********** 2025-05-13 20:10:18.609026 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.609037 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.609047 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.609058 | orchestrator | 2025-05-13 20:10:18.609069 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-13 20:10:18.609079 | orchestrator | Tuesday 13 May 2025 20:08:28 +0000 (0:00:00.646) 0:00:18.476 *********** 2025-05-13 20:10:18.609090 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609100 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609111 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609128 | orchestrator | 2025-05-13 20:10:18.609139 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-13 20:10:18.609150 | orchestrator | Tuesday 13 May 2025 
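
The long run of "skipping" results above is ceph-ansible's osd_auto_discovery guard: every entry in ansible_facts['devices'] is offered to the task, and each one is skipped because `osd_auto_discovery | default(False) | bool` is false in this testbed (OSD devices are listed explicitly instead). As a rough Python sketch of the selection auto-discovery would otherwise perform, assuming input shaped like the device dicts echoed above (the exact conditions in ceph-ansible differ; these mirror what the facts show: sdb/sdc already hold ceph LVM volumes, sr0 is removable, loop0-7 are zero-sized, sda is partitioned):

    # Hypothetical filter over ansible_facts['devices']; illustrative only.
    def autodiscover_osd_devices(devices: dict) -> list:
        candidates = []
        for name, info in devices.items():
            if info.get("partitions"):        # partitioned (e.g. sda) -> skip
                continue
            if info.get("holders"):           # already an LVM PV / ceph OSD -> skip
                continue
            if info.get("removable") != "0":  # sr0 (QEMU DVD-ROM) -> skip
                continue
            if int(info.get("sectors") or 0) == 0:  # empty loop devices -> skip
                continue
            candidates.append("/dev/" + name)
        return sorted(candidates)

On these nodes only the unused 20.00 GB disk (sdd) would survive such a filter.
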
20:08:29 +0000 (0:00:00.279) 0:00:18.755 *********** 2025-05-13 20:10:18.609160 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609209 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609220 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609231 | orchestrator | 2025-05-13 20:10:18.609242 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-13 20:10:18.609252 | orchestrator | Tuesday 13 May 2025 20:08:29 +0000 (0:00:00.374) 0:00:19.129 *********** 2025-05-13 20:10:18.609263 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609273 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609284 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609294 | orchestrator | 2025-05-13 20:10:18.609305 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-13 20:10:18.609315 | orchestrator | Tuesday 13 May 2025 20:08:30 +0000 (0:00:00.522) 0:00:19.652 *********** 2025-05-13 20:10:18.609326 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-13 20:10:18.609337 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-13 20:10:18.609347 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-13 20:10:18.609358 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-13 20:10:18.609369 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-13 20:10:18.609379 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-13 20:10:18.609390 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-13 20:10:18.609400 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-13 20:10:18.609411 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-13 20:10:18.609421 | orchestrator | 2025-05-13 20:10:18.609432 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-13 20:10:18.609443 | orchestrator | Tuesday 13 May 2025 20:08:31 +0000 (0:00:00.875) 0:00:20.527 *********** 2025-05-13 20:10:18.609453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-13 20:10:18.609464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-13 20:10:18.609475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-13 20:10:18.609485 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-13 20:10:18.609507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-13 20:10:18.609517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-13 20:10:18.609528 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-13 20:10:18.609549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-13 20:10:18.609560 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-13 20:10:18.609570 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609581 | orchestrator | 2025-05-13 20:10:18.609591 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-13 20:10:18.609602 | orchestrator | Tuesday 13 May 2025 20:08:31 +0000 (0:00:00.353) 0:00:20.880 *********** 2025-05-13 20:10:18.609613 | 
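
The _monitor_addresses tasks above run once per monitor (testbed-node-0 through testbed-node-2) on every OSD node; the ipv6 variant is skipped because this testbed is IPv4-only. A minimal sketch of the fact being assembled, with hostvars reduced to the one key that matters here (monitor_address is the ceph-ansible variable; the addresses are the ones the delegation lines further down show):

    # Build the monitor address list; hostvars is a stand-in for
    # Ansible's hostvars structure.
    def monitor_addresses(mon_group, hostvars):
        return [{"name": h, "addr": hostvars[h]["monitor_address"]}
                for h in mon_group]

    mons = monitor_addresses(
        ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
        {"testbed-node-0": {"monitor_address": "192.168.16.10"},
         "testbed-node-1": {"monitor_address": "192.168.16.11"},
         "testbed-node-2": {"monitor_address": "192.168.16.12"}},
    )
    # -> [{'name': 'testbed-node-0', 'addr': '192.168.16.10'}, ...]
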
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:10:18.609624 | orchestrator | 2025-05-13 20:10:18.609635 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-13 20:10:18.609647 | orchestrator | Tuesday 13 May 2025 20:08:32 +0000 (0:00:00.667) 0:00:21.547 *********** 2025-05-13 20:10:18.609658 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609668 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609679 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609697 | orchestrator | 2025-05-13 20:10:18.609714 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-13 20:10:18.609725 | orchestrator | Tuesday 13 May 2025 20:08:32 +0000 (0:00:00.329) 0:00:21.877 *********** 2025-05-13 20:10:18.609736 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609747 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609758 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609768 | orchestrator | 2025-05-13 20:10:18.609779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-13 20:10:18.609790 | orchestrator | Tuesday 13 May 2025 20:08:32 +0000 (0:00:00.298) 0:00:22.175 *********** 2025-05-13 20:10:18.609800 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609816 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.609828 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:10:18.609838 | orchestrator | 2025-05-13 20:10:18.609849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-13 20:10:18.609860 | orchestrator | Tuesday 13 May 2025 20:08:32 +0000 (0:00:00.326) 0:00:22.502 *********** 2025-05-13 20:10:18.609871 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.609881 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.609892 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.609903 | orchestrator | 2025-05-13 20:10:18.609914 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-13 20:10:18.609925 | orchestrator | Tuesday 13 May 2025 20:08:33 +0000 (0:00:00.580) 0:00:23.083 *********** 2025-05-13 20:10:18.609935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:10:18.609946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:10:18.609957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:10:18.609967 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.609978 | orchestrator | 2025-05-13 20:10:18.609989 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-13 20:10:18.610000 | orchestrator | Tuesday 13 May 2025 20:08:33 +0000 (0:00:00.372) 0:00:23.456 *********** 2025-05-13 20:10:18.610010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:10:18.610082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:10:18.610093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:10:18.610104 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.610114 | orchestrator | 2025-05-13 20:10:18.610125 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-13 20:10:18.610136 | orchestrator | Tuesday 13 May 2025 20:08:34 +0000 (0:00:00.360) 0:00:23.816 *********** 2025-05-13 20:10:18.610146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-13 20:10:18.610157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-13 20:10:18.610194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-13 20:10:18.610205 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.610216 | orchestrator | 2025-05-13 20:10:18.610226 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-13 20:10:18.610237 | orchestrator | Tuesday 13 May 2025 20:08:34 +0000 (0:00:00.481) 0:00:24.298 *********** 2025-05-13 20:10:18.610247 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:10:18.610258 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:10:18.610269 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:10:18.610279 | orchestrator | 2025-05-13 20:10:18.610290 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-13 20:10:18.610300 | orchestrator | Tuesday 13 May 2025 20:08:35 +0000 (0:00:00.296) 0:00:24.594 *********** 2025-05-13 20:10:18.610311 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-13 20:10:18.610322 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-13 20:10:18.610333 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-13 20:10:18.610343 | orchestrator | 2025-05-13 20:10:18.610365 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-13 20:10:18.610376 | orchestrator | Tuesday 13 May 2025 20:08:35 +0000 (0:00:00.501) 0:00:25.096 *********** 2025-05-13 20:10:18.610386 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 20:10:18.610397 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:10:18.610407 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:10:18.610418 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-13 20:10:18.610429 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 20:10:18.610440 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 20:10:18.610450 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 20:10:18.610461 | orchestrator | 2025-05-13 20:10:18.610471 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-13 20:10:18.610482 | orchestrator | Tuesday 13 May 2025 20:08:36 +0000 (0:00:00.950) 0:00:26.047 *********** 2025-05-13 20:10:18.610492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-13 20:10:18.610503 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-13 20:10:18.610514 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-13 20:10:18.610524 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-13 20:10:18.610535 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-13 
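
The Set_fact ceph_run_cmd and ceph_admin_command tasks below iterate over every reachable host and delegate to each one (the `-> testbed-node-N(192.168.16.x)` arrows), so each node ends up with a command prefix valid for itself. On a containerized deployment like this one the prefix wraps the ceph CLI in the local container; the sketch below is an illustration under that assumption, not ceph-facts' literal expression (the container engine and naming scheme are assumed for the example):

    # Illustrative per-host command prefixes for a containerized cluster.
    def ceph_run_cmd(hostname, containerized=True, engine="docker"):
        if containerized:
            return [engine, "exec", "ceph-mon-" + hostname, "ceph"]
        return ["ceph"]

    def ceph_admin_command(hostname, cluster="ceph"):
        # The admin variant additionally pins the cluster name.
        return ceph_run_cmd(hostname) + ["--cluster", cluster]
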
20:10:18.610545 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-13 20:10:18.610556 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-13 20:10:18.610567 | orchestrator | 2025-05-13 20:10:18.610584 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-13 20:10:18.610595 | orchestrator | Tuesday 13 May 2025 20:08:38 +0000 (0:00:01.904) 0:00:27.951 *********** 2025-05-13 20:10:18.610606 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:10:18.610616 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:10:18.610627 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-13 20:10:18.610638 | orchestrator | 2025-05-13 20:10:18.610648 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-13 20:10:18.610659 | orchestrator | Tuesday 13 May 2025 20:08:38 +0000 (0:00:00.401) 0:00:28.353 *********** 2025-05-13 20:10:18.610676 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:10:18.610688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:10:18.610699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:10:18.610711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:10:18.610729 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-13 20:10:18.610740 | orchestrator | 2025-05-13 20:10:18.610751 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-13 20:10:18.610761 | orchestrator | Tuesday 13 May 2025 20:09:23 +0000 (0:00:44.888) 0:01:13.242 *********** 2025-05-13 20:10:18.610772 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610793 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610814 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610836 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-13 20:10:18.610846 | orchestrator | 2025-05-13 20:10:18.610857 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-13 20:10:18.610867 | orchestrator | Tuesday 13 May 2025 20:09:47 +0000 (0:00:23.792) 0:01:37.034 *********** 2025-05-13 20:10:18.610878 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610888 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610899 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610909 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610920 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610941 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-13 20:10:18.610951 | orchestrator | 2025-05-13 20:10:18.610962 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-13 20:10:18.610973 | orchestrator | Tuesday 13 May 2025 20:09:59 +0000 (0:00:11.689) 0:01:48.724 *********** 2025-05-13 20:10:18.610983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.610994 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611004 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611015 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.611025 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611036 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.611063 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611074 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611085 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.611095 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611106 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611122 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.611141 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611152 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611187 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-13 20:10:18.611208 | orchestrator | 
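
"create openstack pool(s)" is the dominant step of this play (44.89s in the recap below): five replicated RBD pools, each with pg_num/pgp_num 32, size 3, the default replicated_rule, and autoscaling disabled, created once via the first monitor. A sketch of the equivalent ceph CLI calls, assuming they run on a mon host (in the play they go through the delegated admin command instead):

    import subprocess

    # Pool spec as echoed in the task items above.
    POOLS = ["backups", "volumes", "images", "metrics", "vms"]

    def create_openstack_pools(pg_num=32, size=3):
        for name in POOLS:
            subprocess.run(["ceph", "osd", "pool", "create", name,
                            str(pg_num), str(pg_num), "replicated",
                            "replicated_rule"], check=True)
            subprocess.run(["ceph", "osd", "pool", "set", name,
                            "pg_autoscale_mode", "off"], check=True)
            subprocess.run(["ceph", "osd", "pool", "set", name,
                            "size", str(size)], check=True)
            subprocess.run(["ceph", "osd", "pool", "application",
                            "enable", name, "rbd"], check=True)

The "generate keys", "get keys from monitors", and "copy ceph key(s) if needed" tasks around this point make the matching round trip for the client keyrings: keys are created on the first mon and the resulting keyrings are distributed to all three monitor hosts (192.168.16.10 through 192.168.16.12).
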
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-13 20:10:18.611227 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-13 20:10:18.611245 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-13 20:10:18.611263 | orchestrator | 2025-05-13 20:10:18.611274 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:10:18.611285 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-05-13 20:10:18.611297 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-13 20:10:18.611308 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-13 20:10:18.611319 | orchestrator | 2025-05-13 20:10:18.611330 | orchestrator | 2025-05-13 20:10:18.611340 | orchestrator | 2025-05-13 20:10:18.611351 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:10:18.611361 | orchestrator | Tuesday 13 May 2025 20:10:16 +0000 (0:00:17.438) 0:02:06.162 *********** 2025-05-13 20:10:18.611372 | orchestrator | =============================================================================== 2025-05-13 20:10:18.611383 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.89s 2025-05-13 20:10:18.611394 | orchestrator | generate keys ---------------------------------------------------------- 23.79s 2025-05-13 20:10:18.611404 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.44s 2025-05-13 20:10:18.611415 | orchestrator | get keys from monitors ------------------------------------------------- 11.69s 2025-05-13 20:10:18.611425 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2025-05-13 20:10:18.611436 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s 2025-05-13 20:10:18.611447 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s 2025-05-13 20:10:18.611457 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2025-05-13 20:10:18.611468 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-05-13 20:10:18.611478 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2025-05-13 20:10:18.611489 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.74s 2025-05-13 20:10:18.611500 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2025-05-13 20:10:18.611510 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2025-05-13 20:10:18.611521 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2025-05-13 20:10:18.611531 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-05-13 20:10:18.611542 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2025-05-13 20:10:18.611552 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2025-05-13 20:10:18.611563 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address 
--------------- 0.58s 2025-05-13 20:10:18.611573 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s 2025-05-13 20:10:18.611584 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.54s 2025-05-13 20:10:18.611594 | orchestrator | 2025-05-13 20:10:18 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:18.611614 | orchestrator | 2025-05-13 20:10:18 | INFO  | Task 10f052ae-6ea4-4ddf-80d9-355d23f64cad is in state STARTED 2025-05-13 20:10:18.611625 | orchestrator | 2025-05-13 20:10:18 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:21.632324 | orchestrator | 2025-05-13 20:10:21 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:21.632693 | orchestrator | 2025-05-13 20:10:21 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:21.633385 | orchestrator | 2025-05-13 20:10:21 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:21.633861 | orchestrator | 2025-05-13 20:10:21 | INFO  | Task 10f052ae-6ea4-4ddf-80d9-355d23f64cad is in state SUCCESS 2025-05-13 20:10:21.633886 | orchestrator | 2025-05-13 20:10:21 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:24.675954 | orchestrator | 2025-05-13 20:10:24 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:24.678319 | orchestrator | 2025-05-13 20:10:24 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:24.680935 | orchestrator | 2025-05-13 20:10:24 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:24.681597 | orchestrator | 2025-05-13 20:10:24 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:27.728683 | orchestrator | 2025-05-13 20:10:27 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:27.728794 | orchestrator | 2025-05-13 20:10:27 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:27.729691 | orchestrator | 2025-05-13 20:10:27 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:27.729723 | orchestrator | 2025-05-13 20:10:27 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:30.781379 | orchestrator | 2025-05-13 20:10:30 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:30.781603 | orchestrator | 2025-05-13 20:10:30 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:30.782455 | orchestrator | 2025-05-13 20:10:30 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:30.782483 | orchestrator | 2025-05-13 20:10:30 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:33.821331 | orchestrator | 2025-05-13 20:10:33 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:33.821850 | orchestrator | 2025-05-13 20:10:33 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:33.823447 | orchestrator | 2025-05-13 20:10:33 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:33.823479 | orchestrator | 2025-05-13 20:10:33 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:36.869000 | orchestrator | 2025-05-13 20:10:36 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 
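
The TASKS RECAP above (profile_tasks-style "name ---- 12.34s" rows) makes the expensive steps easy to spot: pool creation at 44.89s and key generation at 23.79s account for most of the 0:02:06 play. A small parser for that layout, e.g. to track the slowest steps across CI runs (the regex assumes the dash-filler-plus-trailing-seconds format shown above):

    import re

    RECAP_RE = re.compile(r"^(?P<name>.+?)\s-+\s(?P<secs>\d+\.\d+)s\s*$")

    def parse_recap(lines):
        timings = []
        for line in lines:
            m = RECAP_RE.match(line.strip())
            if m:
                timings.append((m.group("name"), float(m.group("secs"))))
        # Slowest tasks first.
        return sorted(timings, key=lambda t: -t[1])

    # parse_recap(["create openstack pool(s) ------- 44.89s"])
    # -> [('create openstack pool(s)', 44.89)]
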
20:10:36.869416 | orchestrator | 2025-05-13 20:10:36 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:36.871734 | orchestrator | 2025-05-13 20:10:36 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:36.871772 | orchestrator | 2025-05-13 20:10:36 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:39.915474 | orchestrator | 2025-05-13 20:10:39 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:39.916047 | orchestrator | 2025-05-13 20:10:39 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:39.919543 | orchestrator | 2025-05-13 20:10:39 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:39.919581 | orchestrator | 2025-05-13 20:10:39 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:42.957822 | orchestrator | 2025-05-13 20:10:42 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:42.961686 | orchestrator | 2025-05-13 20:10:42 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:42.961744 | orchestrator | 2025-05-13 20:10:42 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:42.961757 | orchestrator | 2025-05-13 20:10:42 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:46.012080 | orchestrator | 2025-05-13 20:10:46 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:46.014435 | orchestrator | 2025-05-13 20:10:46 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:46.015733 | orchestrator | 2025-05-13 20:10:46 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:46.015775 | orchestrator | 2025-05-13 20:10:46 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:49.062768 | orchestrator | 2025-05-13 20:10:49 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:49.062843 | orchestrator | 2025-05-13 20:10:49 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:49.063699 | orchestrator | 2025-05-13 20:10:49 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:49.063788 | orchestrator | 2025-05-13 20:10:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:52.099478 | orchestrator | 2025-05-13 20:10:52 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:52.101899 | orchestrator | 2025-05-13 20:10:52 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state STARTED 2025-05-13 20:10:52.104437 | orchestrator | 2025-05-13 20:10:52 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:52.104556 | orchestrator | 2025-05-13 20:10:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:10:55.169748 | orchestrator | 2025-05-13 20:10:55 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED 2025-05-13 20:10:55.169985 | orchestrator | 2025-05-13 20:10:55 | INFO  | Task 8e21ecf0-9f0f-444b-821b-c0654021a7b7 is in state SUCCESS 2025-05-13 20:10:55.171250 | orchestrator | 2025-05-13 20:10:55 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED 2025-05-13 20:10:55.171959 | orchestrator | 2025-05-13 20:10:55 | INFO  | Task 2f5bd649-1555-43fe-aca2-57b1554eeaa5 is in state STARTED 2025-05-13 20:10:55.172278 | 
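
From here to the end of the excerpt the console is the deploy wrapper polling its outstanding task IDs, waiting a second between rounds: a task drops out of the rotation once it reaches SUCCESS (10f052ae at 20:10:21, 8e21ecf0 at 20:10:55) and newly spawned tasks join it (2f5bd649 from 20:10:55). The STARTED/SUCCESS values match Celery task states, which the OSISM manager appears to use; the loop below is a self-contained sketch of that behavior with the state lookup abstracted away (fetch_state is a stand-in for, e.g., a Celery AsyncResult query, not OSISM's actual implementation):

    import time

    def wait_for_tasks(task_ids, fetch_state, interval=1.0):
        """Poll until every tracked task has left the STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                print("Task %s is in state %s" % (task_id, state))
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print("Wait %d second(s) until the next check" % interval)
                time.sleep(interval)
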
orchestrator | 2025-05-13 20:10:55 | INFO  | Wait 1 second(s) until the next check
[... status checks of tasks bd2eb8da-0689-431d-813a-1634d549c4f3, 4cd30c61-4d2f-4f04-90fe-9d599c256198 and 2f5bd649-1555-43fe-aca2-57b1554eeaa5, repeated every ~3 seconds and each followed by "Wait 1 second(s) until the next check", omitted from 20:10:58 to 20:12:05; all three tasks remained in state STARTED ...]
2025-05-13 20:12:08.548525 | orchestrator | 2025-05-13 20:12:08 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:08.548746 | orchestrator | 2025-05-13 20:12:08 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state STARTED
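The status checks above are the OSISM client polling asynchronous task IDs until they reach a terminal state; the client code itself is not part of this log. A minimal sketch of the polling pattern in Python, with a hypothetical fetch_state() helper standing in for the real result-backend lookup:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def fetch_state(task_id: str) -> str:
    """Hypothetical helper: return the task's state from the result backend."""
    raise NotImplementedError

def wait_for_tasks(task_ids, delay=1.0):
    # Poll every pending task, log its state, and drop it once it is
    # terminal; sleep between rounds, as the "Wait 1 second(s)" records show.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(delay)} second(s) until the next check")
            time.sleep(delay)

(The ~3-second spacing between rounds in the log is plausibly the nominal 1-second wait plus the time the status queries themselves take; that is an inference, not something the log states.)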
2025-05-13 20:12:08.556504 | orchestrator | None
2025-05-13 20:12:08.556528 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-13 20:12:08.556548 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-13 20:12:08.556559 | orchestrator | Tuesday 13 May 2025 20:10:21 +0000 (0:00:01.463) 0:00:01.463 ***********
2025-05-13 20:12:08.556569 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-13 20:12:08.556580 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556589 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-05-13 20:12:08.556624 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-05-13 20:12:08.556646 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-05-13 20:12:08.556655 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-05-13 20:12:08.556664 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-05-13 20:12:08.556683 | orchestrator | TASK [Create share directory] **************************************************
2025-05-13 20:12:08.556693 | orchestrator | Tuesday 13 May 2025 20:10:26 +0000 (0:00:05.418) 0:00:06.882 ***********
2025-05-13 20:12:08.556703 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-13 20:12:08.556722 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-05-13 20:12:08.556731 | orchestrator | Tuesday 13 May 2025 20:10:29 +0000 (0:00:02.216) 0:00:09.098 ***********
2025-05-13 20:12:08.556741 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-13 20:12:08.556751 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556760 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556769 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-13 20:12:08.556778 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556788 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-13 20:12:08.556800 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-13 20:12:08.556821 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-13 20:12:08.556843 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-13 20:12:08.556873 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-05-13 20:12:08.556888 | orchestrator | Tuesday 13 May 2025 20:10:44 +0000 (0:00:14.871) 0:00:23.970 ***********
2025-05-13 20:12:08.556904 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-05-13 20:12:08.556919 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556962 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.556979 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-05-13 20:12:08.556995 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-13 20:12:08.557013 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-05-13 20:12:08.557029 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-05-13 20:12:08.557047 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-05-13 20:12:08.557058 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-05-13 20:12:08.557081 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:12:08.557093 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:12:08.557128 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:12:08.557169 | orchestrator | Tuesday 13 May 2025 20:10:51 +0000 (0:00:07.823) 0:00:31.793 ***********
2025-05-13 20:12:08.557181 | orchestrator | ===============================================================================
2025-05-13 20:12:08.557192 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.87s
2025-05-13 20:12:08.557202 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.82s
2025-05-13 20:12:08.557213 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 5.42s
2025-05-13 20:12:08.557224 | orchestrator | Create share directory -------------------------------------------------- 2.22s
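The play above fetches the keyrings once from testbed-node-0 and then writes the same item list to two destinations. A rough Python equivalent of that copy fan-out (the directory arguments are hypothetical stand-ins, and the keyring list is deduplicated here even though the play iterates ceph.client.cinder.keyring several times):

import shutil
from pathlib import Path

# Keyring names as printed by the play above.
KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.client.cinder.keyring",
    "ceph.client.cinder-backup.keyring",
    "ceph.client.nova.keyring",
    "ceph.client.glance.keyring",
    "ceph.client.gnocchi.keyring",
    "ceph.client.manila.keyring",
]

def distribute_keys(fetched_dir: Path, share_dir: Path, config_dir: Path) -> None:
    # Mirrors "Create share directory" plus the two "Write ceph keys ..."
    # tasks: ensure each target exists, then copy every keyring into it.
    for target in (share_dir, config_dir):
        target.mkdir(parents=True, exist_ok=True)
        for name in KEYRINGS:
            shutil.copy2(fetched_dir / name, target / name)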
2025-05-13 20:12:08.557256 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-05-13 20:12:08.557295 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-05-13 20:12:08.557306 | orchestrator | Tuesday 13 May 2025 20:10:59 +0000 (0:00:02.506) 0:00:02.506 ***********
2025-05-13 20:12:08.557316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-05-13 20:12:08.557337 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-05-13 20:12:08.557352 | orchestrator | Tuesday 13 May 2025 20:11:00 +0000 (0:00:01.714) 0:00:04.221 ***********
2025-05-13 20:12:08.557369 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-05-13 20:12:08.557395 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-05-13 20:12:08.557412 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-05-13 20:12:08.557446 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-05-13 20:12:08.557470 | orchestrator | Tuesday 13 May 2025 20:11:02 +0000 (0:00:02.216) 0:00:06.437 ***********
2025-05-13 20:12:08.557480 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-05-13 20:12:08.557499 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-05-13 20:12:08.557509 | orchestrator | Tuesday 13 May 2025 20:11:05 +0000 (0:00:02.129) 0:00:08.566 ***********
2025-05-13 20:12:08.557518 | orchestrator | changed: [testbed-manager]
2025-05-13 20:12:08.557537 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-05-13 20:12:08.557546 | orchestrator | Tuesday 13 May 2025 20:11:06 +0000 (0:00:01.903) 0:00:10.469 ***********
2025-05-13 20:12:08.557556 | orchestrator | changed: [testbed-manager]
2025-05-13 20:12:08.557586 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-05-13 20:12:08.557610 | orchestrator | Tuesday 13 May 2025 20:11:09 +0000 (0:00:02.114) 0:00:12.584 ***********
2025-05-13 20:12:08.557631 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-05-13 20:12:08.557648 | orchestrator | ok: [testbed-manager]
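The FAILED - RETRYING record is Ansible's retries/until loop: the service check failed on the first attempt (the containers were still coming up) and passed on a later one, which is why the task still ends in ok after consuming 42 seconds. Roughly the same semantics in Python, with a hypothetical is_service_up() probe:

import time

def is_service_up() -> bool:
    """Hypothetical probe; the real task's success condition is not shown."""
    raise NotImplementedError

def manage_service(retries: int = 10, delay: float = 5.0) -> None:
    # Re-run the check until it holds or the retry budget is exhausted,
    # printing the countdown the way Ansible does.
    for attempt in range(retries + 1):
        if is_service_up():
            return
        if attempt < retries:
            print(f"FAILED - RETRYING: Manage cephclient service "
                  f"({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError("cephclient service did not become available")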
2025-05-13 20:12:08.557678 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-05-13 20:12:08.557693 | orchestrator | Tuesday 13 May 2025 20:11:51 +0000 (0:00:42.033) 0:00:54.617 ***********
2025-05-13 20:12:08.557710 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-05-13 20:12:08.557726 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-05-13 20:12:08.557741 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-05-13 20:12:08.557756 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-05-13 20:12:08.557772 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-05-13 20:12:08.557803 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-05-13 20:12:08.557819 | orchestrator | Tuesday 13 May 2025 20:11:56 +0000 (0:00:05.063) 0:00:59.681 ***********
2025-05-13 20:12:08.557834 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-05-13 20:12:08.557865 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-05-13 20:12:08.557880 | orchestrator | Tuesday 13 May 2025 20:11:57 +0000 (0:00:01.437) 0:01:01.118 ***********
2025-05-13 20:12:08.557895 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:12:08.557927 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-05-13 20:12:08.557942 | orchestrator | Tuesday 13 May 2025 20:11:58 +0000 (0:00:01.108) 0:01:02.227 ***********
2025-05-13 20:12:08.557957 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:12:08.557987 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-05-13 20:12:08.558002 | orchestrator | Tuesday 13 May 2025 20:11:59 +0000 (0:00:01.000) 0:01:03.227 ***********
2025-05-13 20:12:08.558078 | orchestrator | changed: [testbed-manager]
2025-05-13 20:12:08.558110 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-05-13 20:12:08.558126 | orchestrator | Tuesday 13 May 2025 20:12:02 +0000 (0:00:02.422) 0:01:05.650 ***********
2025-05-13 20:12:08.558165 | orchestrator | changed: [testbed-manager]
2025-05-13 20:12:08.558195 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-05-13 20:12:08.558210 | orchestrator | Tuesday 13 May 2025 20:12:03 +0000 (0:00:01.486) 0:01:07.256 ***********
2025-05-13 20:12:08.558226 | orchestrator | changed: [testbed-manager]
2025-05-13 20:12:08.558256 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-05-13 20:12:08.558271 | orchestrator | Tuesday 13 May 2025 20:12:05 +0000 (0:00:01.486) 0:01:08.742 ***********
2025-05-13 20:12:08.558287 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-05-13 20:12:08.558302 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-05-13 20:12:08.558317 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-05-13 20:12:08.558331 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-05-13 20:12:08.558362 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:12:08.558378 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:12:08.558424 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:12:08.558453 | orchestrator | Tuesday 13 May 2025 20:12:07 +0000 (0:00:02.733) 0:01:11.476 ***********
2025-05-13 20:12:08.558480 | orchestrator | ===============================================================================
2025-05-13 20:12:08.558496 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.03s
2025-05-13 20:12:08.558511 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.06s
2025-05-13 20:12:08.558527 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.73s
2025-05-13 20:12:08.558542 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.42s
2025-05-13 20:12:08.558557 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.22s
2025-05-13 20:12:08.558572 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.13s
2025-05-13 20:12:08.558588 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 2.11s
2025-05-13 20:12:08.558603 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.90s
2025-05-13 20:12:08.558626 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.72s
2025-05-13 20:12:08.558641 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.61s
2025-05-13 20:12:08.558656 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 1.49s
2025-05-13 20:12:08.558671 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.44s
2025-05-13 20:12:08.558687 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.11s
2025-05-13 20:12:08.558702 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.00s
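The wrapper scripts installed above (ceph, ceph-authtool, rados, radosgw-admin, rbd) let the manager invoke the containerized Ceph client as if the tools were installed natively. The role's actual template is not shown in this log; a plausible minimal generator, assuming the service is the docker-compose deployment under /opt/cephclient seen earlier:

from pathlib import Path

WRAPPERS = ["ceph", "ceph-authtool", "rados", "radosgw-admin", "rbd"]

# Hypothetical wrapper body: forward the call into the cephclient container.
# The real template in osism.services.cephclient may differ.
TEMPLATE = """#!/usr/bin/env bash
exec docker compose --project-directory /opt/cephclient exec cephclient {name} "$@"
"""

def write_wrappers(bindir: Path = Path("/usr/local/bin")) -> None:
    for name in WRAPPERS:
        script = bindir / name
        script.write_text(TEMPLATE.format(name=name))
        script.chmod(0o755)  # wrappers must be executable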
2025-05-13 20:12:08.558718 | orchestrator | 2025-05-13 20:12:08 | INFO  | Task 2f5bd649-1555-43fe-aca2-57b1554eeaa5 is in state SUCCESS
2025-05-13 20:12:08.558733 | orchestrator | 2025-05-13 20:12:08 | INFO  | Wait 1 second(s) until the next check
[... status checks of tasks fc710b4a-aa41-4448-8c24-563d619cc389, bd2eb8da-0689-431d-813a-1634d549c4f3, b56c5d65-aadb-4ddf-973c-33791c4d0553, 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e and 4cd30c61-4d2f-4f04-90fe-9d599c256198, repeated every ~3 seconds from 20:12:11 to 20:12:23, omitted; all remained in state STARTED until 4cd30c61 completed ...]
2025-05-13 20:12:23.847258 | orchestrator | 2025-05-13 20:12:23 | INFO  | Task 4cd30c61-4d2f-4f04-90fe-9d599c256198 is in state SUCCESS
2025-05-13 20:12:23.848491 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:12:23.848516 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:12:23.848524 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.233) 0:00:00.233
*********** 2025-05-13 20:12:23.848531 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:12:23.848539 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:12:23.848546 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:12:23.848553 | orchestrator | 2025-05-13 20:12:23.848561 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:12:23.848568 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.235) 0:00:00.468 *********** 2025-05-13 20:12:23.848575 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-13 20:12:23.848583 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-13 20:12:23.848590 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-13 20:12:23.848597 | orchestrator | 2025-05-13 20:12:23.848603 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-13 20:12:23.848610 | orchestrator | 2025-05-13 20:12:23.848617 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 20:12:23.848624 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.357) 0:00:00.825 *********** 2025-05-13 20:12:23.848631 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:12:23.848639 | orchestrator | 2025-05-13 20:12:23.848646 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-13 20:12:23.848653 | orchestrator | Tuesday 13 May 2025 20:10:20 +0000 (0:00:00.474) 0:00:01.300 *********** 2025-05-13 20:12:23.848678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.848866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.848882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.848889 | orchestrator | 2025-05-13 20:12:23.848897 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-13 20:12:23.848903 | orchestrator | Tuesday 13 May 2025 20:10:21 +0000 (0:00:01.561) 0:00:02.862 *********** 2025-05-13 20:12:23.848910 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:12:23.848917 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:12:23.848923 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:12:23.848930 | orchestrator | 2025-05-13 20:12:23.848936 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 20:12:23.848943 | orchestrator | Tuesday 13 May 2025 20:10:22 +0000 (0:00:00.348) 0:00:03.211 *********** 2025-05-13 20:12:23.848950 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 20:12:23.848961 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 20:12:23.848972 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 20:12:23.848980 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 20:12:23.848986 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 20:12:23.848993 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-13 20:12:23.849000 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-13 20:12:23.849007 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 20:12:23.849014 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 20:12:23.849027 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 20:12:23.849034 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 20:12:23.849041 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 20:12:23.849048 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 20:12:23.849055 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 
'enabled': False})  2025-05-13 20:12:23.849062 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-13 20:12:23.849069 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 20:12:23.849076 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-13 20:12:23.849082 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-13 20:12:23.849090 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-13 20:12:23.849097 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-13 20:12:23.849104 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-13 20:12:23.849111 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-13 20:12:23.849143 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-13 20:12:23.849152 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-13 20:12:23.849159 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-13 20:12:23.849166 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-13 20:12:23.849173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-13 20:12:23.849180 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-13 20:12:23.849187 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-13 20:12:23.849194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-13 20:12:23.849201 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-13 20:12:23.849208 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-13 20:12:23.849215 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-13 20:12:23.849222 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-13 20:12:23.849229 | orchestrator | 2025-05-13 20:12:23.849236 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-13 20:12:23.849243 | orchestrator | Tuesday 13 May 2025 20:10:22 +0000 (0:00:00.636) 0:00:03.847 *********** 2025-05-13 20:12:23.849250 | orchestrator | ok: [testbed-node-0] 2025-05-13 
20:12:23.849257 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:12:23.849269 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:12:23.849283 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-13 20:12:23.849290 | orchestrator | Tuesday 13 May 2025 20:10:23 +0000 (0:00:00.258) 0:00:04.106 ***********
2025-05-13 20:12:23.849297 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:12:23.849315 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-13 20:12:23.849326 | orchestrator | Tuesday 13 May 2025 20:10:23 +0000 (0:00:00.113) 0:00:04.219 ***********
2025-05-13 20:12:23.849333 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:12:23.849340 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:12:23.849347 | orchestrator | skipping: [testbed-node-2]
[... the same three-task sequence — "Update policy file name" (ok on all three nodes), "Check if policies shall be overwritten" (skipping), "Update custom policy file name" (skipping on all three nodes) — repeats once per service included above, from 20:10:23 to 20:10:32, with identical results each time; the remaining repetitions are omitted here ...]
2025-05-13 20:12:23.850469 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-05-13 20:12:23.850476 | orchestrator | Tuesday 13 May 2025 20:10:32 +0000 (0:00:00.310) 0:00:13.645 ***********
2025-05-13 20:12:23.850483 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:12:23.850490 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:12:23.850497 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:12:23.850511 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-13 20:12:23.850518 | orchestrator | Tuesday 13 May 2025 20:10:34 +0000 (0:00:01.656) 0:00:15.301 ***********
2025-05-13 20:12:23.850530 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-13 20:12:23.850538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-13 20:12:23.850545 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
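Each horizon container definition above carries a healthcheck block ({'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://<node-ip>:80'], 'timeout': '30'}); Docker runs kolla's healthcheck_curl helper with those parameters. A rough Python rendering of what those four numbers mean, with a stand-in probe instead of the real shell helper:

import time
import urllib.request

def probe(url: str, timeout: float = 30.0) -> bool:
    # One healthcheck_curl-style probe: the check passes if the HTTP GET
    # completes without an error status.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

def container_is_healthy(url: str, interval: float = 30.0, retries: int = 3,
                         start_period: float = 5.0) -> bool:
    # Wait out the start period, then tolerate up to `retries` consecutive
    # failed probes before declaring the container unhealthy.
    time.sleep(start_period)
    failures = 0
    while failures < retries:
        if probe(url):
            return True
        failures += 1
        time.sleep(interval)
    return False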
2025-05-13 20:12:23.850559 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-13 20:12:23.850566 | orchestrator | Tuesday 13 May 2025 20:10:36 +0000 (0:00:02.111) 0:00:17.413 ***********
2025-05-13 20:12:23.850573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-13 20:12:23.850581 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-13 20:12:23.850588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-13 20:12:23.850602 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-13 20:12:23.850614 | orchestrator | Tuesday 13 May 2025 20:10:39 +0000 (0:00:02.760) 0:00:20.173 ***********
2025-05-13 20:12:23.850625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-13 20:12:23.850632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-13 20:12:23.850639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-13 20:12:23.850654 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-13 20:12:23.850661 | orchestrator | Tuesday 13 May 2025 20:10:41 +0000 (0:00:01.881) 0:00:22.054 ***********
2025-05-13 20:12:23.850668 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:12:23.850675 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:12:23.850683 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:12:23.850697 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-13 20:12:23.850702 | orchestrator | Tuesday 13 May 2025 20:10:41 +0000 (0:00:00.362) 0:00:22.417 ***********
2025-05-13 20:12:23.850708 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:12:23.850714 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:12:23.850720 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:12:23.850731 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-13 20:12:23.850737 | orchestrator | Tuesday 13 May 2025 20:10:41 +0000 (0:00:00.286) 0:00:22.703 ***********
2025-05-13 20:12:23.850743 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:12:23.850755 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-05-13 20:12:23.850761 | orchestrator | Tuesday 13 May 2025 20:10:42 +0000 (0:00:00.766) 0:00:23.469 ***********
2025-05-13 20:12:23.850770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes',
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.850800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.850810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.850821 | orchestrator | 2025-05-13 20:12:23.850829 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-13 20:12:23.850836 | orchestrator | Tuesday 13 May 2025 20:10:44 +0000 (0:00:01.785) 0:00:25.255 *********** 2025-05-13 20:12:23.850852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850863 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:12:23.850876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850884 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:12:23.850893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850904 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:12:23.850912 | orchestrator | 2025-05-13 20:12:23.850919 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-13 20:12:23.850926 | orchestrator | Tuesday 13 May 2025 20:10:45 +0000 (0:00:00.793) 0:00:26.048 *********** 2025-05-13 20:12:23.850943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850952 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:12:23.850961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850973 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:12:23.850990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-13 20:12:23.850998 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:12:23.851005 | orchestrator | 2025-05-13 20:12:23.851012 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-13 20:12:23.851020 | orchestrator | Tuesday 13 May 2025 20:10:46 +0000 (0:00:01.203) 0:00:27.252 *********** 2025-05-13 20:12:23.851028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.851047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.851056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-13 20:12:23.851067 | orchestrator | 2025-05-13 20:12:23.851075 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 20:12:23.851082 | orchestrator | Tuesday 13 May 2025 20:10:47 +0000 (0:00:01.397) 0:00:28.649 *********** 2025-05-13 20:12:23.851088 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:12:23.851096 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:12:23.851103 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:12:23.851110 | orchestrator | 2025-05-13 20:12:23.851117 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-13 20:12:23.851135 | orchestrator | Tuesday 13 May 2025 20:10:48 +0000 (0:00:00.445) 0:00:29.094 *********** 2025-05-13 20:12:23.851142 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:12:23.851150 | orchestrator | 2025-05-13 20:12:23.851156 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-13 20:12:23.851166 | orchestrator | Tuesday 13 May 2025 20:10:48 +0000 (0:00:00.726) 
0:00:29.821 ***********
2025-05-13 20:12:23.851171 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:12:23.851177 | orchestrator |
2025-05-13 20:12:23.851185 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-05-13 20:12:23.851191 | orchestrator | Tuesday 13 May 2025 20:10:51 +0000 (0:00:02.160) 0:00:31.981 ***********
2025-05-13 20:12:23.851197 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:12:23.851204 | orchestrator |
2025-05-13 20:12:23.851210 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-05-13 20:12:23.851216 | orchestrator | Tuesday 13 May 2025 20:10:53 +0000 (0:00:02.194) 0:00:34.175 ***********
2025-05-13 20:12:23.851222 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:12:23.851229 | orchestrator |
2025-05-13 20:12:23.851236 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-13 20:12:23.851241 | orchestrator | Tuesday 13 May 2025 20:11:07 +0000 (0:00:14.430) 0:00:48.605 ***********
2025-05-13 20:12:23.851248 | orchestrator |
2025-05-13 20:12:23.851259 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-13 20:12:23.851266 | orchestrator | Tuesday 13 May 2025 20:11:07 +0000 (0:00:00.066) 0:00:48.672 ***********
2025-05-13 20:12:23.851273 | orchestrator |
2025-05-13 20:12:23.851280 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-13 20:12:23.851286 | orchestrator | Tuesday 13 May 2025 20:11:07 +0000 (0:00:00.063) 0:00:48.736 ***********
2025-05-13 20:12:23.851292 | orchestrator |
2025-05-13 20:12:23.851299 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-13 20:12:23.851306 | orchestrator | Tuesday 13 May 2025 20:11:07 +0000 (0:00:00.065) 0:00:48.801 ***********
2025-05-13 20:12:23.851313 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:12:23.851320 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:12:23.851328 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:12:23.851335 | orchestrator |
2025-05-13 20:12:23.851342 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:12:23.851349 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-05-13 20:12:23.851356 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-05-13 20:12:23.851364 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-05-13 20:12:23.851371 | orchestrator |
2025-05-13 20:12:23.851378 | orchestrator |
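The two database tasks above ran only on testbed-node-0: the role creates the horizon schema and its database user once against the database cluster before the one-shot bootstrap container initializes it. A minimal sketch of the equivalent SQL, assuming the pymysql driver and placeholder host/credentials (the real values come from kolla-ansible's generated secrets, not from anything shown in this log):

    import pymysql

    # Placeholder endpoint and credentials; this only illustrates what the
    # "Creating Horizon database" / "... database user" tasks do.
    conn = pymysql.connect(host="192.168.16.10", user="root", password="secret")
    try:
        with conn.cursor() as cur:
            # "Creating Horizon database"
            cur.execute("CREATE DATABASE IF NOT EXISTS horizon")
            # "Creating Horizon database user and setting permissions"
            cur.execute(
                "CREATE USER IF NOT EXISTS 'horizon'@'%' IDENTIFIED BY 'horizon-db-pass'"
            )
            cur.execute("GRANT ALL PRIVILEGES ON horizon.* TO 'horizon'@'%'")
        conn.commit()
    finally:
        conn.close()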
2025-05-13 20:12:23.851385 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:12:23.851392 | orchestrator | Tuesday 13 May 2025 20:12:20 +0000 (0:01:13.057) 0:02:01.859 ***********
2025-05-13 20:12:23.851399 | orchestrator | ===============================================================================
2025-05-13 20:12:23.851406 | orchestrator | horizon : Restart horizon container ------------------------------------ 73.06s
2025-05-13 20:12:23.851413 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.43s
2025-05-13 20:12:23.851420 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.76s
2025-05-13 20:12:23.851427 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.19s
2025-05-13 20:12:23.851434 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s
2025-05-13 20:12:23.851440 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.11s
2025-05-13 20:12:23.851447 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.88s
2025-05-13 20:12:23.851454 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.79s
2025-05-13 20:12:23.851461 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.66s
2025-05-13 20:12:23.851468 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.56s
2025-05-13 20:12:23.851475 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.40s
2025-05-13 20:12:23.851482 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.20s
2025-05-13 20:12:23.851489 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s
2025-05-13 20:12:23.851496 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s
2025-05-13 20:12:23.851503 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s
2025-05-13 20:12:23.851509 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s
2025-05-13 20:12:23.851516 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s
2025-05-13 20:12:23.851526 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s
2025-05-13 20:12:23.851533 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s
2025-05-13 20:12:23.851547 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
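The sorted duration list above is emitted by a task-profiling callback and has a stable line shape, which makes it easy to post-process when comparing runs. A small sketch, not part of the job itself, that parses such recap lines into (task, seconds) pairs:

    import re

    # Matches recap lines like:
    # "horizon : Restart horizon container ------------------------------------ 73.06s"
    RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

    def parse_recap(lines):
        """Yield (task_name, seconds) from a TASKS RECAP duration list."""
        for line in lines:
            m = RECAP_RE.match(line.strip())
            if m:
                yield m.group("task"), float(m.group("secs"))

    sample = "horizon : Restart horizon container ------------------------------------ 73.06s"
    print(list(parse_recap([sample])))  # [('horizon : Restart horizon container', 73.06)]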
2025-05-13 20:12:23.851554 | orchestrator | 2025-05-13 20:12:23 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:26.914774 | orchestrator | 2025-05-13 20:12:26 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:26.916312 | orchestrator | 2025-05-13 20:12:26 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:26.917717 | orchestrator | 2025-05-13 20:12:26 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:26.919540 | orchestrator | 2025-05-13 20:12:26 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:26.919588 | orchestrator | 2025-05-13 20:12:26 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:29.957980 | orchestrator | 2025-05-13 20:12:29 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:29.958188 | orchestrator | 2025-05-13 20:12:29 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:29.959314 | orchestrator | 2025-05-13 20:12:29 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:29.960545 | orchestrator | 2025-05-13 20:12:29 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:29.960586 | orchestrator | 2025-05-13 20:12:29 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:33.026257 | orchestrator | 2025-05-13 20:12:33 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:33.031702 | orchestrator | 2025-05-13 20:12:33 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:33.034479 | orchestrator | 2025-05-13 20:12:33 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:33.035580 | orchestrator | 2025-05-13 20:12:33 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:33.035609 | orchestrator | 2025-05-13 20:12:33 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:36.077754 | orchestrator | 2025-05-13 20:12:36 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:36.078434 | orchestrator | 2025-05-13 20:12:36 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:36.079899 | orchestrator | 2025-05-13 20:12:36 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:36.081485 | orchestrator | 2025-05-13 20:12:36 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:36.081516 | orchestrator | 2025-05-13 20:12:36 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:39.122819 | orchestrator | 2025-05-13 20:12:39 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:39.122909 | orchestrator | 2025-05-13 20:12:39 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:39.123591 | orchestrator | 2025-05-13 20:12:39 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:39.124381 | orchestrator | 2025-05-13 20:12:39 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:39.124401 | orchestrator | 2025-05-13 20:12:39 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:42.172484 | orchestrator | 2025-05-13 20:12:42 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:42.172587 | orchestrator | 2025-05-13 20:12:42 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:42.172640 | orchestrator | 2025-05-13 20:12:42 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:42.173302 | orchestrator | 2025-05-13 20:12:42 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:42.173335 | orchestrator | 2025-05-13 20:12:42 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:45.201082 | orchestrator | 2025-05-13 20:12:45 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:45.201416 | orchestrator | 2025-05-13 20:12:45 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:45.202009 | orchestrator | 2025-05-13 20:12:45 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:45.202555 | orchestrator | 2025-05-13 20:12:45 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:45.202578 | orchestrator | 2025-05-13 20:12:45 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:48.232372 | orchestrator | 2025-05-13 20:12:48 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:48.232581 | orchestrator | 2025-05-13 20:12:48 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:48.233048 | orchestrator | 2025-05-13 20:12:48 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:48.233686 | orchestrator | 2025-05-13 20:12:48 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state STARTED
2025-05-13 20:12:48.233728 | orchestrator | 2025-05-13 20:12:48 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:51.269204 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:51.269870 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:51.277286 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:51.277368 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e is in state SUCCESS
2025-05-13 20:12:51.280811 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:12:51.283662 | orchestrator | 2025-05-13 20:12:51 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:12:51.283723 | orchestrator | 2025-05-13 20:12:51 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:54.328761 | orchestrator | 2025-05-13 20:12:54 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:54.332283 | orchestrator | 2025-05-13 20:12:54 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:54.332887 | orchestrator | 2025-05-13 20:12:54 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:54.333943 | orchestrator | 2025-05-13 20:12:54 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:12:54.334527 | orchestrator | 2025-05-13 20:12:54 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:12:54.334706 | orchestrator | 2025-05-13 20:12:54 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:12:57.394879 | orchestrator | 2025-05-13 20:12:57 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:12:57.395803 | orchestrator | 2025-05-13 20:12:57 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:12:57.401045 | orchestrator | 2025-05-13 20:12:57 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:12:57.402475 | orchestrator | 2025-05-13 20:12:57 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:12:57.404217 | orchestrator | 2025-05-13 20:12:57 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:12:57.404294 | orchestrator | 2025-05-13 20:12:57 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:00.461801 | orchestrator | 2025-05-13 20:13:00 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:00.465381 | orchestrator | 2025-05-13 20:13:00 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:13:00.468080 | orchestrator | 2025-05-13 20:13:00 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:00.474690 | orchestrator | 2025-05-13 20:13:00 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:00.474758 | orchestrator | 2025-05-13 20:13:00 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:00.474772 | orchestrator | 2025-05-13 20:13:00 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:03.520470 | orchestrator | 2025-05-13 20:13:03 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:03.520559 | orchestrator | 2025-05-13 20:13:03 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:13:03.520573 | orchestrator | 2025-05-13 20:13:03 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:03.520583 | orchestrator | 2025-05-13 20:13:03 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:03.520593 | orchestrator | 2025-05-13 20:13:03 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:03.520602 | orchestrator | 2025-05-13 20:13:03 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:06.567170 | orchestrator | 2025-05-13 20:13:06 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:06.568626 | orchestrator | 2025-05-13 20:13:06 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:13:06.570623 | orchestrator | 2025-05-13 20:13:06 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:06.572492 | orchestrator | 2025-05-13 20:13:06 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:06.574827 | orchestrator | 2025-05-13 20:13:06 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:06.574908 | orchestrator | 2025-05-13 20:13:06 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:09.633001 | orchestrator | 2025-05-13 20:13:09 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:09.637443 | orchestrator | 2025-05-13 20:13:09 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:13:09.640112 | orchestrator | 2025-05-13 20:13:09 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:09.642058 | orchestrator | 2025-05-13 20:13:09 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:09.643945 | orchestrator | 2025-05-13 20:13:09 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:09.644187 | orchestrator | 2025-05-13 20:13:09 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:12.697056 | orchestrator | 2025-05-13 20:13:12 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:12.698588 | orchestrator | 2025-05-13 20:13:12 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
2025-05-13 20:13:12.700616 | orchestrator | 2025-05-13 20:13:12 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:12.702641 | orchestrator | 2025-05-13 20:13:12 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:12.704245 | orchestrator | 2025-05-13 20:13:12 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:12.704280 | orchestrator | 2025-05-13 20:13:12 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:15.762420 | orchestrator | 2025-05-13 20:13:15 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:15.764154 | orchestrator | 2025-05-13 20:13:15 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state STARTED
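Once the Ansible play itself has finished, the deployment CLI keeps polling its manager for the state of the outstanding tasks, announcing a one-second wait between checks; note how task 9e8f7d3d-e708-41c6-ab84-1b0da0c43b0e flipped to SUCCESS above and two new task IDs took its place. A minimal sketch of this wait-loop pattern, with get_state standing in as a hypothetical state-lookup function (the real lookup mechanism is not shown in this log):

    import time

    TERMINAL = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Poll until every task reaches a terminal state.

        get_state is caller-supplied (hypothetical here) and maps a task ID
        to a state string such as STARTED or SUCCESS.
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL:
                    pending.discard(task_id)
            if pending:
                print("Wait 1 second(s) until the next check")
                time.sleep(interval)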
2025-05-13 20:13:15.766513 | orchestrator | 2025-05-13 20:13:15 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
2025-05-13 20:13:15.768428 | orchestrator | 2025-05-13 20:13:15 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
2025-05-13 20:13:15.770619 | orchestrator | 2025-05-13 20:13:15 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
2025-05-13 20:13:15.770651 | orchestrator | 2025-05-13 20:13:15 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:13:18.829417 | orchestrator | 2025-05-13 20:13:18 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state STARTED
2025-05-13 20:13:18.834798 | orchestrator | 2025-05-13 20:13:18 | INFO  | Task bd2eb8da-0689-431d-813a-1634d549c4f3 is in state SUCCESS
2025-05-13 20:13:18.837966 | orchestrator |
2025-05-13 20:13:18.838522 | orchestrator |
2025-05-13 20:13:18.838558 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:13:18.838570 | orchestrator |
2025-05-13 20:13:18.838582 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:13:18.838593 | orchestrator | Tuesday 13 May 2025 20:12:12 +0000 (0:00:00.186) 0:00:00.186 ***********
2025-05-13 20:13:18.838604 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:13:18.838616 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:13:18.838627 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:13:18.838638 | orchestrator |
2025-05-13 20:13:18.838648 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:13:18.838659 | orchestrator | Tuesday 13 May 2025 20:12:13 +0000 (0:00:00.320) 0:00:00.506 ***********
2025-05-13 20:13:18.838670 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-13 20:13:18.838681 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-13 20:13:18.838691 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-13 20:13:18.838702 | orchestrator |
2025-05-13 20:13:18.838712 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-13 20:13:18.838723 | orchestrator |
2025-05-13 20:13:18.838734 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-13 20:13:18.838744 | orchestrator | Tuesday 13 May 2025 20:12:13 +0000 (0:00:00.662) 0:00:01.169 ***********
2025-05-13 20:13:18.838755 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:13:18.838765 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:13:18.838776 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:13:18.838787 | orchestrator |
2025-05-13 20:13:18.838797 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:13:18.838809 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:13:18.838821 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:13:18.838896 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:13:18.838909 | orchestrator |
2025-05-13 20:13:18.838920 | orchestrator |
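The "Waiting for Keystone public port to be UP" task that just reported ok on all three nodes behaves like a plain TCP reachability check, retrying a connect until the load balancer answers on the Keystone port, which is why it dominates the recap below at 35.82s. A rough Python equivalent follows; the host value is illustrative, while port 5000 matches the Keystone healthchecks later in this log:

    import socket
    import time

    def wait_for_port(host, port, timeout=300.0, delay=2.0):
        """Return once a TCP connect to host:port succeeds; raise on timeout."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5.0):
                    return
            except OSError:
                time.sleep(delay)
        raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")

    # Example with an illustrative address:
    # wait_for_port("192.168.16.10", 5000)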
2025-05-13 20:13:18.838930 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:13:18.838940 | orchestrator | Tuesday 13 May 2025 20:12:49 +0000 (0:00:35.815) 0:00:36.986 ***********
2025-05-13 20:13:18.838951 | orchestrator | ===============================================================================
2025-05-13 20:13:18.838961 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 35.82s
2025-05-13 20:13:18.838972 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-05-13 20:13:18.838982 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-05-13 20:13:18.838992 | orchestrator |
2025-05-13 20:13:18.839003 | orchestrator |
2025-05-13 20:13:18.839013 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:13:18.839024 | orchestrator |
2025-05-13 20:13:18.839034 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:13:18.839046 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.242) 0:00:00.242 ***********
2025-05-13 20:13:18.839057 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:13:18.839067 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:13:18.839104 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:13:18.839115 | orchestrator |
2025-05-13 20:13:18.839126 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:13:18.839136 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.234) 0:00:00.476 ***********
2025-05-13 20:13:18.839146 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-13 20:13:18.839157 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-13 20:13:18.839168 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-13 20:13:18.839178 | orchestrator |
2025-05-13 20:13:18.839188 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-05-13 20:13:18.839199 | orchestrator |
2025-05-13 20:13:18.839209 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-13 20:13:18.839220 | orchestrator | Tuesday 13 May 2025 20:10:19 +0000 (0:00:00.324) 0:00:00.800 ***********
2025-05-13 20:13:18.839230 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:13:18.839241 | orchestrator |
2025-05-13 20:13:18.839252 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-13 20:13:18.839263 | orchestrator | Tuesday 13 May 2025 20:10:20 +0000 (0:00:00.485) 0:00:01.286 ***********
2025-05-13 20:13:18.839325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']},
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839504 | orchestrator | 2025-05-13 20:13:18.839515 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-13 20:13:18.839526 | orchestrator | Tuesday 13 May 2025 20:10:22 +0000 (0:00:01.971) 0:00:03.258 *********** 2025-05-13 20:13:18.839537 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-13 20:13:18.839548 | orchestrator | 2025-05-13 20:13:18.839558 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-13 20:13:18.839570 | orchestrator | Tuesday 13 May 2025 20:10:23 +0000 (0:00:00.782) 0:00:04.041 *********** 2025-05-13 20:13:18.839580 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:13:18.839591 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:13:18.839602 | orchestrator | ok: [testbed-node-2] 
2025-05-13 20:13:18.839613 | orchestrator | 2025-05-13 20:13:18.839623 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-13 20:13:18.839634 | orchestrator | Tuesday 13 May 2025 20:10:23 +0000 (0:00:00.355) 0:00:04.397 *********** 2025-05-13 20:13:18.839645 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:13:18.839657 | orchestrator | 2025-05-13 20:13:18.839667 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 20:13:18.839678 | orchestrator | Tuesday 13 May 2025 20:10:24 +0000 (0:00:00.657) 0:00:05.054 *********** 2025-05-13 20:13:18.839689 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:13:18.839700 | orchestrator | 2025-05-13 20:13:18.839710 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-13 20:13:18.839721 | orchestrator | Tuesday 13 May 2025 20:10:24 +0000 (0:00:00.511) 0:00:05.565 *********** 2025-05-13 20:13:18.839732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.839816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.839955 | orchestrator | 2025-05-13 20:13:18.839975 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-13 20:13:18.839992 | orchestrator | Tuesday 13 May 2025 20:10:28 +0000 (0:00:03.473) 0:00:09.039 *********** 2025-05-13 20:13:18.840011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840113 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.840136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840176 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.840188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840235 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.840246 | orchestrator | 2025-05-13 20:13:18.840257 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-13 20:13:18.840268 | orchestrator | Tuesday 13 May 2025 20:10:29 +0000 (0:00:01.030) 0:00:10.070 *********** 2025-05-13 20:13:18.840280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840319 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.840344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840386 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.840403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-13 20:13:18.840415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-13 20:13:18.840445 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.840455 | orchestrator | 2025-05-13 20:13:18.840466 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-13 20:13:18.840477 | orchestrator | Tuesday 13 May 2025 20:10:30 +0000 (0:00:01.577) 0:00:11.647 *********** 2025-05-13 20:13:18.840495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840626 | orchestrator | 2025-05-13 20:13:18.840637 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-13 20:13:18.840648 | orchestrator | Tuesday 13 May 2025 20:10:34 +0000 (0:00:04.247) 0:00:15.895 *********** 2025-05-13 20:13:18.840660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.840745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.840756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.840796 | orchestrator | 2025-05-13 20:13:18.840807 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-13 20:13:18.840818 | orchestrator | Tuesday 13 May 2025 20:10:40 +0000 (0:00:05.994) 0:00:21.889 *********** 2025-05-13 20:13:18.840829 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:13:18.840840 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:13:18.840851 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:13:18.840862 | orchestrator | 2025-05-13 20:13:18.840872 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-13 20:13:18.840883 | orchestrator | Tuesday 13 May 2025 20:10:42 +0000 (0:00:01.381) 0:00:23.271 *********** 2025-05-13 20:13:18.840894 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.840905 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.840915 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.840926 | orchestrator | 2025-05-13 20:13:18.840937 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-13 20:13:18.840948 | orchestrator | Tuesday 13 May 2025 20:10:43 +0000 (0:00:00.943) 0:00:24.214 *********** 2025-05-13 
20:13:18.840958 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.840969 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.840986 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.840997 | orchestrator | 2025-05-13 20:13:18.841008 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-13 20:13:18.841023 | orchestrator | Tuesday 13 May 2025 20:10:43 +0000 (0:00:00.507) 0:00:24.722 *********** 2025-05-13 20:13:18.841033 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.841044 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.841055 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.841066 | orchestrator | 2025-05-13 20:13:18.841095 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-13 20:13:18.841106 | orchestrator | Tuesday 13 May 2025 20:10:44 +0000 (0:00:00.320) 0:00:25.042 *********** 2025-05-13 20:13:18.841118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.841130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.841149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.841161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.841184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.841196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-13 20:13:18.841208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.841219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.841237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.841249 | orchestrator | 2025-05-13 20:13:18.841259 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 20:13:18.841270 | orchestrator | Tuesday 13 May 2025 20:10:46 +0000 (0:00:02.604) 0:00:27.647 *********** 2025-05-13 20:13:18.841281 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.841292 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.841302 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.841319 | orchestrator | 2025-05-13 20:13:18.841330 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-13 20:13:18.841340 | orchestrator | Tuesday 13 May 2025 20:10:47 +0000 (0:00:00.354) 0:00:28.001 *********** 2025-05-13 20:13:18.841351 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 20:13:18.841362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 20:13:18.841373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-13 20:13:18.841384 | orchestrator | 2025-05-13 20:13:18.841394 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-13 20:13:18.841405 | orchestrator | Tuesday 13 May 2025 20:10:49 +0000 (0:00:02.095) 0:00:30.096 *********** 2025-05-13 20:13:18.841416 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:13:18.841427 | orchestrator | 2025-05-13 20:13:18.841438 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-13 20:13:18.841449 | orchestrator | Tuesday 13 May 2025 20:10:50 +0000 (0:00:00.906) 0:00:31.003 *********** 2025-05-13 20:13:18.841459 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.841470 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.841486 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.841497 | orchestrator | 2025-05-13 20:13:18.841508 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-13 20:13:18.841518 | orchestrator | Tuesday 13 May 2025 20:10:50 +0000 (0:00:00.510) 0:00:31.513 *********** 2025-05-13 20:13:18.841529 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 20:13:18.841540 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:13:18.841551 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 20:13:18.841561 | orchestrator | 2025-05-13 20:13:18.841572 | 
orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-13 20:13:18.841583 | orchestrator | Tuesday 13 May 2025 20:10:51 +0000 (0:00:01.258) 0:00:32.772 *********** 2025-05-13 20:13:18.841593 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:13:18.841605 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:13:18.841616 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:13:18.841627 | orchestrator | 2025-05-13 20:13:18.841637 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-13 20:13:18.841648 | orchestrator | Tuesday 13 May 2025 20:10:52 +0000 (0:00:00.288) 0:00:33.060 *********** 2025-05-13 20:13:18.841659 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 20:13:18.841670 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 20:13:18.841681 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-13 20:13:18.841691 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 20:13:18.841702 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 20:13:18.841713 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-13 20:13:18.841724 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 20:13:18.841735 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 20:13:18.841746 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-13 20:13:18.841756 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 20:13:18.841767 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 20:13:18.841778 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-13 20:13:18.841802 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 20:13:18.841813 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 20:13:18.841824 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-13 20:13:18.841835 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:13:18.841846 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:13:18.841857 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:13:18.841874 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:13:18.841885 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:13:18.841896 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:13:18.841906 | orchestrator |
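The seven files just copied form the fernet key-rotation machinery of the keystone_fernet container: a crontab, the rotate/sync/push helper scripts, the health-check script behind the /usr/bin/fernet-healthcheck.sh probes above, and an SSH identity plus client config for reaching the keystone_ssh sidecars on their non-standard port (8023, matching the healthcheck_listen sshd 8023 checks). In effect, one node periodically rotates the key repository and pushes the result to its peers. An illustrative Ansible-style sketch of that flow; the real logic lives in the shell templates named above, so apart from the port and the keystone-manage subcommand, treat the details as assumptions:

```yaml
# Sketch only: paraphrases fernet-rotate.sh / fernet-push.sh as tasks
# against the "keystone" inventory group.
- name: Rotate the fernet key repository (what the cron job triggers)
  command: >
    keystone-manage fernet_rotate
    --keystone-user keystone --keystone-group keystone
  run_once: true

- name: Push the rotated keys to the other nodes via the keystone_ssh sidecars
  command: >
    rsync -az -e 'ssh -p 8023 -F /etc/keystone/ssh_config'
    /etc/keystone/fernet-keys/ keystone@{{ item }}:/etc/keystone/fernet-keys/
  run_once: true
  loop: "{{ groups['keystone'] | difference([inventory_hostname]) }}"
```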
2025-05-13 20:13:18.841917 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-13 20:13:18.841928 | orchestrator | Tuesday 13 May 2025 20:11:01 +0000 (0:00:08.966) 0:00:42.027 *********** 2025-05-13 20:13:18.841938 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:13:18.841949 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:13:18.841960 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:13:18.841971 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:13:18.841982 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:13:18.841993 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:13:18.842003 | orchestrator | 2025-05-13 20:13:18.842060 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-13 20:13:18.842091 | orchestrator | Tuesday 13 May 2025 20:11:03 +0000 (0:00:02.709) 0:00:44.737 *********** 2025-05-13 20:13:18.842108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.842122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.842151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled':
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-13 20:13:18.842163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-13 20:13:18.842243 | orchestrator | 2025-05-13 20:13:18.842254 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-13 20:13:18.842265 | orchestrator | Tuesday 13 May 2025 20:11:06 +0000 (0:00:02.421) 0:00:47.158 *********** 2025-05-13 20:13:18.842275 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:13:18.842286 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:13:18.842297 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:13:18.842308 | orchestrator | 2025-05-13 20:13:18.842323 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-13 20:13:18.842334 | orchestrator | Tuesday 13 May 2025 20:11:06 +0000 (0:00:00.359) 0:00:47.518 *********** 2025-05-13 20:13:18.842345 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:13:18.842355 | orchestrator | 2025-05-13 20:13:18.842366 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-13 20:13:18.842376 | orchestrator | Tuesday 13 May 2025 20:11:08 +0000 (0:00:02.214) 0:00:49.733 *********** 2025-05-13 20:13:18.842387 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:13:18.842397 | orchestrator | 2025-05-13 20:13:18.842407 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-13 20:13:18.842418 | orchestrator | Tuesday 13 May 2025 20:11:11 +0000 (0:00:02.890) 0:00:52.623 *********** 2025-05-13 20:13:18.842429 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:13:18.842571 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:13:18.842585 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:13:18.842596 | orchestrator | 2025-05-13 20:13:18.842607 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-13 20:13:18.842617 | orchestrator | Tuesday 13 May 2025 20:11:12 +0000 (0:00:00.844) 0:00:53.468 *********** 2025-05-13 20:13:18.842628 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:13:18.842638 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:13:18.842649 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:13:18.842660 | orchestrator | 2025-05-13 20:13:18.842671 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-13 20:13:18.842682 | 
orchestrator | TASK [keystone : include_tasks] ************************************************
orchestrator | Tuesday 13 May 2025 20:11:06 +0000 (0:00:02.421) 0:00:47.158 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [keystone : Creating keystone database] ***********************************
orchestrator | Tuesday 13 May 2025 20:11:06 +0000 (0:00:00.359) 0:00:47.518 ***********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
orchestrator | Tuesday 13 May 2025 20:11:08 +0000 (0:00:02.214) 0:00:49.733 ***********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
orchestrator | Tuesday 13 May 2025 20:11:11 +0000 (0:00:02.890) 0:00:52.623 ***********
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
orchestrator | Tuesday 13 May 2025 20:11:12 +0000 (0:00:00.344) 0:00:53.468 ***********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
orchestrator | Tuesday 13 May 2025 20:11:12 +0000 (0:00:00.344) 0:00:53.812 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
orchestrator | Tuesday 13 May 2025 20:11:13 +0000 (0:00:00.348) 0:00:54.160 ***********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
orchestrator | Tuesday 13 May 2025 20:11:27 +0000 (0:00:13.878) 0:01:08.039 ***********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Flush handlers] ***********************************************
orchestrator | Tuesday 13 May 2025 20:11:36 +0000 (0:00:09.644) 0:01:17.683 ***********
[two further no-op 'Flush handlers' tasks at 20:11:37 elided]
orchestrator |
orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
orchestrator | Tuesday 13 May 2025 20:11:37 +0000 (0:00:00.067) 0:01:18.071 ***********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
orchestrator | Tuesday 13 May 2025 20:12:23 +0000 (0:00:46.671) 0:02:04.742 ***********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
orchestrator | Tuesday 13 May 2025 20:12:29 +0000 (0:00:05.890) 0:02:10.632 ***********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [keystone : include_tasks] ************************************************
orchestrator | Tuesday 13 May 2025 20:12:41 +0000 (0:00:11.832) 0:02:22.465 ***********
orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
orchestrator | Tuesday 13 May 2025 20:12:43 +0000 (0:00:01.876) 0:02:24.341 ***********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [keystone : Run key distribution] *****************************************
orchestrator | Tuesday 13 May 2025 20:12:44 +0000 (0:00:01.236) 0:02:25.578 ***********
orchestrator | changed: [testbed-node-0]
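The 'Waiting for Keystone SSH port to be UP' task gates fernet key distribution on the keystone-ssh containers actually accepting connections (port 8023, per the 'healthcheck_listen sshd 8023' test above). A minimal Python sketch of the same check, equivalent in spirit to Ansible's wait_for module; the node names assume the testbed inventory and only resolve inside it.

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0) -> None:
    # Poll a TCP port until it accepts connections, as Ansible's wait_for does.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"{host}:{port} did not come up within {timeout} s")

# keystone-ssh listens on 8023 (see the healthcheck above).
for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    wait_for_port(node, 8023)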
orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
orchestrator | Tuesday 13 May 2025 20:12:46 +0000 (0:00:02.183) 0:02:27.761 ***********
orchestrator | changed: [testbed-node-0] => (item=RegionOne)
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
orchestrator | Tuesday 13 May 2025 20:12:57 +0000 (0:00:10.784) 0:02:38.546 ***********
orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
orchestrator | Tuesday 13 May 2025 20:13:07 +0000 (0:00:09.503) 0:02:48.049 ***********
orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
orchestrator | Tuesday 13 May 2025 20:13:13 +0000 (0:00:06.066) 0:02:54.116 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
orchestrator | Tuesday 13 May 2025 20:13:13 +0000 (0:00:00.312) 0:02:54.428 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
orchestrator | Tuesday 13 May 2025 20:13:13 +0000 (0:00:00.128) 0:02:54.557 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
orchestrator | Tuesday 13 May 2025 20:13:13 +0000 (0:00:00.143) 0:02:54.701 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : Creating default user role] ***********************************
orchestrator | Tuesday 13 May 2025 20:13:14 +0000 (0:00:00.320) 0:02:55.021 ***********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [keystone : include_tasks] ************************************************
orchestrator | Tuesday 13 May 2025 20:13:17 +0000 (0:00:02.996) 0:02:58.018 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Tuesday 13 May 2025 20:13:17 +0000 (0:00:00.623) 0:02:58.641 ***********
orchestrator | ===============================================================================
orchestrator | keystone : Restart keystone-ssh container ------------------------------ 46.67s
orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.88s
orchestrator | keystone : Restart keystone container ---------------------------------- 11.83s
orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.78s
orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.64s
orchestrator | service-ks-register : keystone | Creating services ---------------------- 9.50s
orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.97s
orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.07s
orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.99s
orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.89s
orchestrator | keystone : Copying over config.json files for services ------------------ 4.25s
orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.47s
orchestrator | keystone : Creating default user role ----------------------------------- 3.00s
orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.89s
orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.71s
orchestrator | keystone : Copying over existing policy file ---------------------------- 2.60s
orchestrator | keystone : Check keystone containers ------------------------------------ 2.42s
orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s
orchestrator | keystone : Run key distribution ----------------------------------------- 2.18s
orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.10s
orchestrator | 2025-05-13 20:13:18 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
orchestrator | 2025-05-13 20:13:18 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
orchestrator | 2025-05-13 20:13:18 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
orchestrator | 2025-05-13 20:13:18 | INFO  | Wait 1 second(s) until the next check
[checks repeat every ~3 s from 20:13:21 to 20:13:31: tasks fc710b4a-aa41-4448-8c24-563d619cc389, b56c5d65-aadb-4ddf-973c-33791c4d0553, 88a1cc8b-c25b-4fec-a1d0-fdf82b628080, 71d69509-908c-4d72-af1b-9d48e71ddc4f and 23c582fb-0003-4878-bf5f-0962d0222b3c all remain in state STARTED]
orchestrator | 2025-05-13 20:13:34 | INFO  | Task fc710b4a-aa41-4448-8c24-563d619cc389 is in state SUCCESS
orchestrator | 2025-05-13 20:13:34 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
orchestrator | 2025-05-13 20:13:34 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state STARTED
orchestrator | 2025-05-13 20:13:34 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED
orchestrator | 2025-05-13 20:13:34 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED
orchestrator | 2025-05-13 20:13:34 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-05-13 20:13:37 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED
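The interleaved 'Task ... is in state STARTED' lines come from the OSISM manager polling its deployment tasks until they finish. A generic sketch of such a loop, not OSISM's actual implementation: get_task_state is a hypothetical stand-in for whatever resolves a task ID to a state (for Celery-style task IDs like these, an AsyncResult lookup).

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every task until it leaves STARTED; mirrors the log output above.
    # get_task_state is a hypothetical stand-in, not an OSISM API.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

# Scripted demo: the task reports STARTED twice, then SUCCESS.
demo = {"fc710b4a": iter(["STARTED", "STARTED", "SUCCESS"])}
wait_for_tasks(demo, lambda tid: next(demo[tid]), interval=0.1)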
orchestrator |
orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
orchestrator |
orchestrator | TASK [Disable the ceph dashboard] **********************************************
orchestrator | Tuesday 13 May 2025 20:12:12 +0000 (0:00:00.271) 0:00:00.271 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
orchestrator | Tuesday 13 May 2025 20:12:15 +0000 (0:00:02.366) 0:00:02.637 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
orchestrator | Tuesday 13 May 2025 20:12:16 +0000 (0:00:01.037) 0:00:03.674 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
orchestrator | Tuesday 13 May 2025 20:12:17 +0000 (0:00:01.231) 0:00:04.906 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
orchestrator | Tuesday 13 May 2025 20:12:18 +0000 (0:00:01.099) 0:00:06.005 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
orchestrator | Tuesday 13 May 2025 20:12:19 +0000 (0:00:01.330) 0:00:07.336 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Enable the ceph dashboard] ***********************************************
orchestrator | Tuesday 13 May 2025 20:12:21 +0000 (0:00:01.079) 0:00:08.415 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
orchestrator | Tuesday 13 May 2025 20:12:23 +0000 (0:00:02.091) 0:00:10.506 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Create admin user] *******************************************************
orchestrator | Tuesday 13 May 2025 20:12:24 +0000 (0:00:01.303) 0:00:11.809 ***********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
orchestrator | Tuesday 13 May 2025 20:13:06 +0000 (0:00:42.351) 0:00:54.161 ***********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator |
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Tuesday 13 May 2025 20:13:06 +0000 (0:00:00.165) 0:00:54.326 ***********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator |
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Tuesday 13 May 2025 20:13:18 +0000 (0:00:11.674) 0:01:06.000 ***********
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | PLAY [Restart ceph manager services] *******************************************
orchestrator |
orchestrator | TASK [Restart ceph manager service] ********************************************
orchestrator | Tuesday 13 May 2025 20:13:19 +0000 (0:00:01.263) 0:01:07.264 ***********
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Tuesday 13 May 2025 20:13:31 +0000 (0:00:11.222) 0:01:18.487 ***********
orchestrator | ===============================================================================
orchestrator | Create admin user ------------------------------------------------------ 42.35s
orchestrator | Restart ceph manager service ------------------------------------------- 24.16s
orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.37s
orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s
orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.33s
orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.30s
orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.23s
orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.10s
orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.08s
orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.04s
orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
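The dashboard play above boils down to a handful of ceph CLI calls. A sketch of the equivalent commands, assuming a working ceph client on the manager; the command forms follow the upstream mgr/dashboard documentation rather than the playbook's literal source, and /tmp/ceph_dashboard_password is a placeholder for the temporary password file the play writes and later removes.

import subprocess

# The settings applied by the play, expressed as ceph CLI calls (sketch only).
for cmd in (
    ["ceph", "mgr", "module", "disable", "dashboard"],
    ["ceph", "config", "set", "mgr", "mgr/dashboard/ssl", "false"],
    ["ceph", "config", "set", "mgr", "mgr/dashboard/server_port", "7000"],
    ["ceph", "config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0"],
    ["ceph", "config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error"],
    ["ceph", "config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404"],
    ["ceph", "mgr", "module", "enable", "dashboard"],
):
    subprocess.run(cmd, check=True)

# The admin user is created with the password read from a file ('-i') so the
# secret never appears in the process list - hence the temporary-file tasks.
subprocess.run(["ceph", "dashboard", "ac-user-create", "admin",
                "-i", "/tmp/ceph_dashboard_password", "administrator"], check=True)

Setting standby_behaviour to error with a 404 status code makes standby managers answer with a plain 404 instead of redirecting, which plays better with a load balancer health check in front of the dashboard.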
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Tuesday 13 May 2025 20:12:53 +0000 (0:00:00.269) 0:00:00.269 ***********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Tuesday 13 May 2025 20:12:54 +0000 (0:00:00.665) 0:00:00.935 ***********
orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
[identical items also ok on testbed-node-0 through testbed-node-5]
orchestrator |
orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
orchestrator | Tuesday 13 May 2025 20:12:55 +0000 (0:00:00.576) 0:00:01.511 ***********
orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
orchestrator | Tuesday 13 May 2025 20:12:56 +0000 (0:00:01.406) 0:00:02.918 ***********
orchestrator | changed: [testbed-manager] => (item=swift (object-store))
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
orchestrator | Tuesday 13 May 2025 20:13:11 +0000 (0:00:15.429) 0:00:18.348 ***********
orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
orchestrator | Tuesday 13 May 2025 20:13:18 +0000 (0:00:06.344) 0:00:24.692 ***********
orchestrator | ok: [testbed-manager] => (item=service)
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
orchestrator | Tuesday 13 May 2025 20:13:21 +0000 (0:00:03.233) 0:00:27.926 ***********
orchestrator | [WARNING]: Module did not set no_log for update_password
orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
orchestrator | Tuesday 13 May 2025 20:13:25 +0000 (0:00:03.667) 0:00:31.594 ***********
orchestrator | ok: [testbed-manager] => (item=admin)
orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
orchestrator |
orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
orchestrator | Tuesday 13 May 2025 20:13:31 +0000 (0:00:06.113) 0:00:37.707 ***********
orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[identical recap lines for testbed-node-1 through testbed-node-5]
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Tuesday 13 May 2025 20:13:35 +0000 (0:00:04.654) 0:00:42.361 ***********
orchestrator | ===============================================================================
orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 15.43s
orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.34s
orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.11s
orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.65s
orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.67s
orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.23s
orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.41s
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
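The service-ks-register tasks register the RGW's Swift-compatible API in Keystone: a service entry, internal and public endpoints, and a service user holding the admin and ResellerAdmin roles. A sketch of the same registration using openstacksdk; kolla-ansible uses its own Ansible modules instead, and the cloud name and password below are placeholders. The '[WARNING]: Module did not set no_log for update_password' line above is Ansible flagging that a password-related parameter is not masked in the module output; it does not indicate a failure.

import openstack

conn = openstack.connect(cloud="testbed")  # placeholder clouds.yaml entry

# Service + endpoints, as in the 'Creating services/endpoints' tasks above.
service = conn.identity.create_service(name="swift", type="object-store")
for interface, fqdn in (("internal", "api-int.testbed.osism.xyz"),
                        ("public", "api.testbed.osism.xyz")):
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, region_id="RegionOne",
        url=f"https://{fqdn}:6780/swift/v1/AUTH_%(project_id)s")

# Service user plus role grants ('Creating users/roles', 'Granting user roles').
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="ceph_rgw", password="CHANGEME",
                                 default_project_id=project.id)
for role_name in ("admin", "ResellerAdmin"):
    role = (conn.identity.find_role(role_name)
            or conn.identity.create_role(name=role_name))
    conn.identity.assign_project_role_to_user(project, user, role)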
------------------------------------------------ 1.41s 2025-05-13 20:13:37.157890 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s 2025-05-13 20:13:37.157901 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-05-13 20:13:37.157912 | orchestrator | 2025-05-13 20:13:37 | INFO  | Task 88a1cc8b-c25b-4fec-a1d0-fdf82b628080 is in state SUCCESS 2025-05-13 20:13:37.159758 | orchestrator | 2025-05-13 20:13:37 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:37.160548 | orchestrator | 2025-05-13 20:13:37 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:37.160852 | orchestrator | 2025-05-13 20:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:40.206676 | orchestrator | 2025-05-13 20:13:40 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:40.207692 | orchestrator | 2025-05-13 20:13:40 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:40.209294 | orchestrator | 2025-05-13 20:13:40 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:40.210410 | orchestrator | 2025-05-13 20:13:40 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:40.210471 | orchestrator | 2025-05-13 20:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:43.251928 | orchestrator | 2025-05-13 20:13:43 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:43.252020 | orchestrator | 2025-05-13 20:13:43 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:43.253200 | orchestrator | 2025-05-13 20:13:43 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:43.253449 | orchestrator | 2025-05-13 20:13:43 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:43.253860 | orchestrator | 2025-05-13 20:13:43 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:46.295305 | orchestrator | 2025-05-13 20:13:46 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:46.295770 | orchestrator | 2025-05-13 20:13:46 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:46.296785 | orchestrator | 2025-05-13 20:13:46 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:46.297964 | orchestrator | 2025-05-13 20:13:46 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:46.297998 | orchestrator | 2025-05-13 20:13:46 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:49.370448 | orchestrator | 2025-05-13 20:13:49 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:49.372200 | orchestrator | 2025-05-13 20:13:49 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:49.374287 | orchestrator | 2025-05-13 20:13:49 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:49.374431 | orchestrator | 2025-05-13 20:13:49 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:49.374461 | orchestrator | 2025-05-13 20:13:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:52.418514 | orchestrator | 2025-05-13 20:13:52 | INFO  | Task 
e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:52.418664 | orchestrator | 2025-05-13 20:13:52 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:52.419731 | orchestrator | 2025-05-13 20:13:52 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:52.421909 | orchestrator | 2025-05-13 20:13:52 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:52.422351 | orchestrator | 2025-05-13 20:13:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:55.468703 | orchestrator | 2025-05-13 20:13:55 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:55.469280 | orchestrator | 2025-05-13 20:13:55 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:55.472438 | orchestrator | 2025-05-13 20:13:55 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:55.472750 | orchestrator | 2025-05-13 20:13:55 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:55.475463 | orchestrator | 2025-05-13 20:13:55 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:13:58.516689 | orchestrator | 2025-05-13 20:13:58 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:13:58.516806 | orchestrator | 2025-05-13 20:13:58 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:13:58.520176 | orchestrator | 2025-05-13 20:13:58 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:13:58.522088 | orchestrator | 2025-05-13 20:13:58 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:13:58.522425 | orchestrator | 2025-05-13 20:13:58 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:01.567724 | orchestrator | 2025-05-13 20:14:01 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:01.568701 | orchestrator | 2025-05-13 20:14:01 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:01.568733 | orchestrator | 2025-05-13 20:14:01 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:01.568745 | orchestrator | 2025-05-13 20:14:01 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:01.568758 | orchestrator | 2025-05-13 20:14:01 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:04.604754 | orchestrator | 2025-05-13 20:14:04 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:04.605337 | orchestrator | 2025-05-13 20:14:04 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:04.606425 | orchestrator | 2025-05-13 20:14:04 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:04.608640 | orchestrator | 2025-05-13 20:14:04 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:04.608674 | orchestrator | 2025-05-13 20:14:04 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:07.639346 | orchestrator | 2025-05-13 20:14:07 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:07.639515 | orchestrator | 2025-05-13 20:14:07 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:07.640566 | orchestrator | 2025-05-13 20:14:07 | INFO  | Task 
71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:07.641223 | orchestrator | 2025-05-13 20:14:07 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:07.641273 | orchestrator | 2025-05-13 20:14:07 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:10.680723 | orchestrator | 2025-05-13 20:14:10 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:10.681797 | orchestrator | 2025-05-13 20:14:10 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:10.684185 | orchestrator | 2025-05-13 20:14:10 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:10.689423 | orchestrator | 2025-05-13 20:14:10 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:10.689494 | orchestrator | 2025-05-13 20:14:10 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:13.733500 | orchestrator | 2025-05-13 20:14:13 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:13.736362 | orchestrator | 2025-05-13 20:14:13 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:13.739115 | orchestrator | 2025-05-13 20:14:13 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:13.743134 | orchestrator | 2025-05-13 20:14:13 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:13.743189 | orchestrator | 2025-05-13 20:14:13 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:16.789264 | orchestrator | 2025-05-13 20:14:16 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:16.791399 | orchestrator | 2025-05-13 20:14:16 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:16.791478 | orchestrator | 2025-05-13 20:14:16 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:16.792456 | orchestrator | 2025-05-13 20:14:16 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:16.792544 | orchestrator | 2025-05-13 20:14:16 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:19.841602 | orchestrator | 2025-05-13 20:14:19 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:19.841878 | orchestrator | 2025-05-13 20:14:19 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:19.843235 | orchestrator | 2025-05-13 20:14:19 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:19.845201 | orchestrator | 2025-05-13 20:14:19 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:19.845275 | orchestrator | 2025-05-13 20:14:19 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:22.886473 | orchestrator | 2025-05-13 20:14:22 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:22.886848 | orchestrator | 2025-05-13 20:14:22 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:22.887865 | orchestrator | 2025-05-13 20:14:22 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:22.888790 | orchestrator | 2025-05-13 20:14:22 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:22.889121 | orchestrator | 2025-05-13 20:14:22 | INFO  | Wait 1 
second(s) until the next check 2025-05-13 20:14:25.921938 | orchestrator | 2025-05-13 20:14:25 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:25.922309 | orchestrator | 2025-05-13 20:14:25 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:25.925588 | orchestrator | 2025-05-13 20:14:25 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:25.926277 | orchestrator | 2025-05-13 20:14:25 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:25.926330 | orchestrator | 2025-05-13 20:14:25 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:28.956764 | orchestrator | 2025-05-13 20:14:28 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:28.956889 | orchestrator | 2025-05-13 20:14:28 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:28.957409 | orchestrator | 2025-05-13 20:14:28 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:28.958134 | orchestrator | 2025-05-13 20:14:28 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:28.958163 | orchestrator | 2025-05-13 20:14:28 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:31.984901 | orchestrator | 2025-05-13 20:14:31 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:31.985221 | orchestrator | 2025-05-13 20:14:31 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:31.985596 | orchestrator | 2025-05-13 20:14:31 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:31.989263 | orchestrator | 2025-05-13 20:14:31 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:31.989347 | orchestrator | 2025-05-13 20:14:31 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:35.028969 | orchestrator | 2025-05-13 20:14:35 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:35.029145 | orchestrator | 2025-05-13 20:14:35 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:35.029884 | orchestrator | 2025-05-13 20:14:35 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:35.030248 | orchestrator | 2025-05-13 20:14:35 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:35.030343 | orchestrator | 2025-05-13 20:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:38.074329 | orchestrator | 2025-05-13 20:14:38 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:38.074482 | orchestrator | 2025-05-13 20:14:38 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:38.076257 | orchestrator | 2025-05-13 20:14:38 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:38.076883 | orchestrator | 2025-05-13 20:14:38 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:38.076907 | orchestrator | 2025-05-13 20:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:41.118974 | orchestrator | 2025-05-13 20:14:41 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:41.119226 | orchestrator | 2025-05-13 20:14:41 | INFO  | Task 
b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:41.121263 | orchestrator | 2025-05-13 20:14:41 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:41.121328 | orchestrator | 2025-05-13 20:14:41 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:41.121342 | orchestrator | 2025-05-13 20:14:41 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:44.161509 | orchestrator | 2025-05-13 20:14:44 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:44.162620 | orchestrator | 2025-05-13 20:14:44 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:44.163137 | orchestrator | 2025-05-13 20:14:44 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:44.164694 | orchestrator | 2025-05-13 20:14:44 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:44.164735 | orchestrator | 2025-05-13 20:14:44 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:47.211174 | orchestrator | 2025-05-13 20:14:47 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:47.211387 | orchestrator | 2025-05-13 20:14:47 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:47.212094 | orchestrator | 2025-05-13 20:14:47 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:47.212884 | orchestrator | 2025-05-13 20:14:47 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:47.213003 | orchestrator | 2025-05-13 20:14:47 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:50.254789 | orchestrator | 2025-05-13 20:14:50 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:50.254907 | orchestrator | 2025-05-13 20:14:50 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:50.256382 | orchestrator | 2025-05-13 20:14:50 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:50.256755 | orchestrator | 2025-05-13 20:14:50 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:50.256789 | orchestrator | 2025-05-13 20:14:50 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:53.297163 | orchestrator | 2025-05-13 20:14:53 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:53.297307 | orchestrator | 2025-05-13 20:14:53 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:53.301367 | orchestrator | 2025-05-13 20:14:53 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:53.301768 | orchestrator | 2025-05-13 20:14:53 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:53.303304 | orchestrator | 2025-05-13 20:14:53 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:56.339211 | orchestrator | 2025-05-13 20:14:56 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:56.340085 | orchestrator | 2025-05-13 20:14:56 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:56.341038 | orchestrator | 2025-05-13 20:14:56 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:56.342679 | orchestrator | 2025-05-13 20:14:56 | INFO  | Task 
23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:56.343182 | orchestrator | 2025-05-13 20:14:56 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:14:59.387215 | orchestrator | 2025-05-13 20:14:59 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:14:59.388559 | orchestrator | 2025-05-13 20:14:59 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:14:59.389660 | orchestrator | 2025-05-13 20:14:59 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:14:59.391104 | orchestrator | 2025-05-13 20:14:59 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:14:59.391251 | orchestrator | 2025-05-13 20:14:59 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:02.446113 | orchestrator | 2025-05-13 20:15:02 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:02.446304 | orchestrator | 2025-05-13 20:15:02 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:02.451744 | orchestrator | 2025-05-13 20:15:02 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:02.452835 | orchestrator | 2025-05-13 20:15:02 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:02.452867 | orchestrator | 2025-05-13 20:15:02 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:05.499055 | orchestrator | 2025-05-13 20:15:05 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:05.499951 | orchestrator | 2025-05-13 20:15:05 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:05.500920 | orchestrator | 2025-05-13 20:15:05 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:05.503589 | orchestrator | 2025-05-13 20:15:05 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:05.504041 | orchestrator | 2025-05-13 20:15:05 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:08.554363 | orchestrator | 2025-05-13 20:15:08 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:08.555369 | orchestrator | 2025-05-13 20:15:08 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:08.557243 | orchestrator | 2025-05-13 20:15:08 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:08.559343 | orchestrator | 2025-05-13 20:15:08 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:08.559372 | orchestrator | 2025-05-13 20:15:08 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:11.605679 | orchestrator | 2025-05-13 20:15:11 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:11.608011 | orchestrator | 2025-05-13 20:15:11 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:11.610093 | orchestrator | 2025-05-13 20:15:11 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:11.612266 | orchestrator | 2025-05-13 20:15:11 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:11.612312 | orchestrator | 2025-05-13 20:15:11 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:14.673666 | orchestrator | 2025-05-13 20:15:14 | INFO  | Task 
e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:14.675381 | orchestrator | 2025-05-13 20:15:14 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:14.682255 | orchestrator | 2025-05-13 20:15:14 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:14.682662 | orchestrator | 2025-05-13 20:15:14 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:14.682675 | orchestrator | 2025-05-13 20:15:14 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:17.732928 | orchestrator | 2025-05-13 20:15:17 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:17.735058 | orchestrator | 2025-05-13 20:15:17 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:17.737885 | orchestrator | 2025-05-13 20:15:17 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:17.740241 | orchestrator | 2025-05-13 20:15:17 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:17.740298 | orchestrator | 2025-05-13 20:15:17 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:20.785879 | orchestrator | 2025-05-13 20:15:20 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:20.786216 | orchestrator | 2025-05-13 20:15:20 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state STARTED 2025-05-13 20:15:20.787091 | orchestrator | 2025-05-13 20:15:20 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:20.788099 | orchestrator | 2025-05-13 20:15:20 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:20.788132 | orchestrator | 2025-05-13 20:15:20 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:23.834788 | orchestrator | 2025-05-13 20:15:23 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:23.842092 | orchestrator | 2025-05-13 20:15:23 | INFO  | Task b56c5d65-aadb-4ddf-973c-33791c4d0553 is in state SUCCESS 2025-05-13 20:15:23.845208 | orchestrator | 2025-05-13 20:15:23.845286 | orchestrator | 2025-05-13 20:15:23.845319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:15:23.845341 | orchestrator | 2025-05-13 20:15:23.845360 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:15:23.845381 | orchestrator | Tuesday 13 May 2025 20:12:12 +0000 (0:00:00.274) 0:00:00.274 *********** 2025-05-13 20:15:23.845594 | orchestrator | ok: [testbed-manager] 2025-05-13 20:15:23.845627 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:15:23.845815 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:15:23.845838 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:15:23.845859 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:15:23.845878 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:15:23.845897 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:15:23.845917 | orchestrator | 2025-05-13 20:15:23.845936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:15:23.845983 | orchestrator | Tuesday 13 May 2025 20:12:13 +0000 (0:00:00.904) 0:00:01.179 *********** 2025-05-13 20:15:23.846005 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-13 20:15:23.846079 | orchestrator | ok: 
2025-05-13 20:15:23.846079 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846093 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846104 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846159 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846197 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846208 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-13 20:15:23.846269 | orchestrator |
2025-05-13 20:15:23.846280 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-13 20:15:23.846291 | orchestrator |
2025-05-13 20:15:23.846303 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-13 20:15:23.846313 | orchestrator | Tuesday 13 May 2025 20:12:14 +0000 (0:00:00.781) 0:00:01.960 ***********
2025-05-13 20:15:23.846344 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:15:23.846427 | orchestrator |
2025-05-13 20:15:23.846441 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-05-13 20:15:23.846452 | orchestrator | Tuesday 13 May 2025 20:12:16 +0000 (0:00:01.620) 0:00:03.581 ***********
2025-05-13 20:15:23.846466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846540 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-13 20:15:23.846553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.846655 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.846667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.846679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.846717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
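Each item echoed by the task above is one entry of the kolla-ansible services dict for the prometheus role: key, container name, group, enabled flag, image, optional pid_mode, bind-mounted volumes, and, for load-balanced services such as prometheus-server, an haproxy section. As an illustration of how such an entry maps onto a container-engine invocation (this is not kolla-ansible's own code, which drives its container module from Ansible; the helper below is a hypothetical sketch built from the node-exporter entry logged above):

    services = {
        "prometheus-node-exporter": {
            "container_name": "prometheus_node_exporter",
            "enabled": True,
            "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
            "pid_mode": "host",
            "volumes": [
                "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
                "/:/host:ro,rslave",
            ],
        },
    }

    def container_args(svc):
        # Translate one service entry into docker/podman-style arguments.
        args = ["--name", svc["container_name"]]
        if svc.get("pid_mode"):
            args += ["--pid", svc["pid_mode"]]
        for volume in svc["volumes"]:
            args += ["--volume", volume]
        return args + [svc["image"]]

    for name, svc in services.items():
        if svc["enabled"]:  # disabled services are never deployed
            print(name, container_args(svc))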
2025-05-13 20:15:23.846837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.846850 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.846872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-13 20:15:23.846921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.846994 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847072 | orchestrator |
2025-05-13 20:15:23.847083 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-13 20:15:23.847094 | orchestrator | Tuesday 13 May 2025 20:12:19 +0000 (0:00:03.178) 0:00:06.760 ***********
2025-05-13 20:15:23.847105 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:15:23.847116 | orchestrator |
2025-05-13 20:15:23.847126 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-05-13 20:15:23.847137 | orchestrator | Tuesday 13 May 2025 20:12:20 +0000 (0:00:01.487) 0:00:08.247 ***********
2025-05-13 20:15:23.847148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847176 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-13 20:15:23.847188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847305 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847539 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847709 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-13 20:15:23.847722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847744 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847813 | orchestrator |
2025-05-13 20:15:23.847824 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-05-13 20:15:23.847834 | orchestrator | Tuesday 13 May 2025 20:12:26 +0000 (0:00:06.046) 0:00:14.294 ***********
2025-05-13 20:15:23.847846 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-13 20:15:23.847857 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.847868 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.847880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-13 20:15:23.847910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.847922 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:15:23.847934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848187 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:15:23.848198 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:15:23.848209 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:15:23.848232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848266 | orchestrator | skipping: [testbed-node-3]
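Both "backend internal TLS" tasks (certificate above, key below) are skipped for every host and every item: this testbed evidently runs without TLS between HAProxy and the backend services, which in kolla-ansible is controlled by the kolla_enable_tls_backend toggle (off by default; inferred here from the skips, not stated in the log). The per-item skip pattern reduces to a guard like this hypothetical sketch:

    # Hypothetical reduction of the skip logic; the real tasks use an
    # Ansible `when:` condition, not Python.
    kolla_enable_tls_backend = False  # assumed testbed default

    hosts = ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]
    for host in hosts:
        if not kolla_enable_tls_backend:
            print(f"skipping: [{host}]")  # matches the surrounding lines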
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:15:23.848289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:15:23.848300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:15:23.848317 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.848328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:15:23.848340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:15:23.848365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-13 20:15:23.848378 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.848388 | orchestrator | 2025-05-13 20:15:23.848399 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS 
key] *** 2025-05-13 20:15:23.848410 | orchestrator | Tuesday 13 May 2025 20:12:28 +0000 (0:00:01.689) 0:00:15.984 *********** 2025-05-13 20:15:23.848422 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-13 20:15:23.848433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:15:23.848444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-13 20:15:23.848462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-13 20:15:23.848474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-13 20:15:23.848485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-13 20:15:23.848544 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848562 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:15:23.848573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848585 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:15:23.848595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848656 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:15:23.848667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.848774 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:15:23.848809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848869 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:15:23.848884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848927 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:15:23.848939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.848972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.848998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849010 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:15:23.849020 | orchestrator |
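[editor's note] The per-item output above and in the next task is what a kolla-ansible-style loop over a dict of service definitions produces: every host evaluates every item, and an item is skipped wherever the service's group does not map to that host, which is why the compute nodes testbed-node-3..5 skip the mysqld, memcached and elasticsearch exporters. As orientation only, a minimal sketch of such a task follows; variable names such as prometheus_services and node_config_directory follow kolla-ansible conventions and this is not the verbatim task from the role:

    - name: Copying over config.json files
      become: true
      template:
        # hypothetical template name; one template per service key is assumed
        src: "{{ item.key }}.json.j2"
        dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
        mode: "0660"
      when:
        # hosts outside the service's group report "skipping" for this item
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
      with_dict: "{{ prometheus_services }}"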
2025-05-13 20:15:23.849031 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-13 20:15:23.849043 | orchestrator | Tuesday 13 May 2025 20:12:30 +0000 (0:00:01.991)       0:00:17.975 ***********
2025-05-13 20:15:23.849054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-13 20:15:23.849099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849242 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-13 20:15:23.849274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849396 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849485 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-13 20:15:23.849498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-13 20:15:23.849582 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-13 20:15:23.849616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
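[editor's note] The next two tasks pick up operator-supplied alert rules: a find on the deployment host collects the *.rules files (each item printed by the copy task below is a stat dict from the find result's files list), and a copy distributes them. Only hosts in the Prometheus server group actually receive the files, so testbed-manager reports changed while all nodes skip every item. A minimal sketch under the same assumptions as above (kolla-ansible naming conventions, not the verbatim role code):

    - name: Find custom prometheus alert rules files
      find:
        # assumed location of the operator-supplied rules; the item paths
        # printed below suggest /operations/prometheus on this testbed
        path: "{{ node_custom_config }}/prometheus/"
        pattern: "*.rules"
      run_once: true
      delegate_to: localhost
      register: prometheus_alert_rules

    - name: Copying over custom prometheus alert rules files
      become: true
      copy:
        src: "{{ item.path }}"
        dest: "{{ node_config_directory }}/prometheus-server/{{ item.path | basename }}"
        mode: "0660"
      when: inventory_hostname in groups['prometheus']
      with_items: "{{ prometheus_alert_rules.files }}"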
2025-05-13 20:15:23.849627 | orchestrator |
2025-05-13 20:15:23.849638 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-13 20:15:23.849649 | orchestrator | Tuesday 13 May 2025 20:12:36 +0000 (0:00:06.010)       0:00:23.985 ***********
2025-05-13 20:15:23.849660 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-13 20:15:23.849671 | orchestrator |
2025-05-13 20:15:23.849682 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-13 20:15:23.850319 | orchestrator | Tuesday 13 May 2025 20:12:37 +0000 (0:00:01.241)       0:00:25.226 ***********
2025-05-13 20:15:23.850358 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850383 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850396 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850408 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850419 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850475 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850501 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850513 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850524 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850535 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850547 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100542, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9623954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850558 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850593 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850631 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850642 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100499, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.937395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850760 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850771 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850783 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850794 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850812 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850854 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850882 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850894 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850908 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850921 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.850941 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851052 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100461, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851067 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851080 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851093 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851106 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851118 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851140 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851198 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851210 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851222 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851233 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851245 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851263 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100462, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.931395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851305 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851318 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851330 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851341 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851352 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851370 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851423 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851436 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851447 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851470 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851491 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851503 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851545 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851557 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100495, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851566 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851577 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851586 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851604 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851614 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-13 20:15:23.851652 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3,
'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851664 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851684 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851694 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851711 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851722 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851741 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851752 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851762 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851772 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851790 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
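The loop items streaming above and below are Ansible find-module stat dicts: each file matched under /operations/prometheus is reported with its path, mode, type flags (isreg, isdir, ...), ownership (uid/gid, pw_name/gr_name), size, inode, timestamps, and per-class permission bits, and the copy task then receives one dict per file. A minimal sketch of that find-then-copy pattern follows; the task names, destination path, and group condition are assumptions for illustration, not the role's actual code:

  # Sketch only: reconstructs the pattern that produces the stat dicts
  # shown in this loop. Anything marked "assumed" is not from the log.
  - name: Find Prometheus rule files            # assumed task name
    delegate_to: localhost
    ansible.builtin.find:
      paths: /operations/prometheus             # path seen in the items above
      patterns: "*.rules"
    register: prometheus_rule_files

  - name: Copy rule files to the Prometheus server host
    ansible.builtin.copy:
      src: "{{ item.path }}"
      dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed destination
      mode: "0644"
    loop: "{{ prometheus_rule_files.files }}"   # one stat dict per matched file
    when: inventory_hostname in groups['prometheus']  # assumed condition; a guard like
                                                      # this would explain why only
                                                      # testbed-manager reports "changed"

The same find module is also the source of the later "[WARNING]: Skipped ... is not a directory" messages: when an optional per-host overrides path such as .../testbed-node-0/prometheus.yml.d does not exist, find skips it with a warning and the play continues normally.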
2025-05-13 20:15:23.851800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.851810 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100470, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9333951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852002 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852041 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852052 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852062 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852082 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852103 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852128 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852140 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852150 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 
1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852160 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852177 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852187 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852197 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100493, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.935395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852217 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852228 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852238 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.852249 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852266 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852276 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852286 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 
20:15:23.852315 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852326 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852336 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852353 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.852363 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852372 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.852382 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 
1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852402 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.852411 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852419 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.852436 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100525, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852445 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852469 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-13 20:15:23.852477 | orchestrator | skipping: [testbed-node-5] 
2025-05-13 20:15:23.852485 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100539, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9413953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852493 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100712, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852501 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100532, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.940395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100467, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.932395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100482, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100458, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9303951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100498, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9363952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852557 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100707, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9653955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852566 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100477, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.934395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852574 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100692, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.9633954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-13 20:15:23.852582 | orchestrator | 2025-05-13 20:15:23.852590 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-13 20:15:23.852598 | orchestrator | Tuesday 13 May 2025 20:13:01 +0000 (0:00:23.550) 0:00:48.776 *********** 2025-05-13 20:15:23.852606 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 20:15:23.852614 | orchestrator | 2025-05-13 20:15:23.852629 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-13 20:15:23.852638 | orchestrator | Tuesday 13 May 2025 20:13:02 +0000 (0:00:00.737) 0:00:49.514 *********** 2025-05-13 20:15:23.852646 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852668 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852676 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852684 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852692 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:15:23.852700 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852707 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852715 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852730 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852738 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852753 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852769 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852776 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852791 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852799 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852807 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852814 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852822 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852830 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852845 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852852 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852868 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852883 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852891 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.852899 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852906 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-13 20:15:23.852914 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-13 20:15:23.852922 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-13 20:15:23.852930 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 20:15:23.852937 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-13 20:15:23.852963 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-13 20:15:23.852971 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 20:15:23.852979 | orchestrator | ok: 
[testbed-node-4 -> localhost] 2025-05-13 20:15:23.852987 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 20:15:23.852995 | orchestrator | 2025-05-13 20:15:23.853002 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-13 20:15:23.853010 | orchestrator | Tuesday 13 May 2025 20:13:03 +0000 (0:00:01.674) 0:00:51.189 *********** 2025-05-13 20:15:23.853018 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853026 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853042 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853050 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853058 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853066 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853073 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853081 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853089 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853097 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853105 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-13 20:15:23.853113 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.853120 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-13 20:15:23.853128 | orchestrator | 2025-05-13 20:15:23.853136 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-13 20:15:23.853144 | orchestrator | Tuesday 13 May 2025 20:13:18 +0000 (0:00:14.969) 0:01:06.158 *********** 2025-05-13 20:15:23.853151 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853164 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853176 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853184 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853192 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853200 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853208 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853215 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853223 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853231 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853239 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-13 20:15:23.853246 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.853255 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-13 20:15:23.853268 | orchestrator | 2025-05-13 20:15:23.853281 | orchestrator | TASK 
[prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-13 20:15:23.853295 | orchestrator | Tuesday 13 May 2025 20:13:22 +0000 (0:00:03.579) 0:01:09.738 *********** 2025-05-13 20:15:23.853307 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853320 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853332 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853345 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853358 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853370 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853382 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853395 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853407 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-13 20:15:23.853428 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853441 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.853454 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-13 20:15:23.853467 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853480 | orchestrator | 2025-05-13 20:15:23.853494 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-13 20:15:23.853509 | orchestrator | Tuesday 13 May 2025 20:13:24 +0000 (0:00:02.081) 0:01:11.819 *********** 2025-05-13 20:15:23.853523 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 20:15:23.853536 | orchestrator | 2025-05-13 20:15:23.853549 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-13 20:15:23.853563 | orchestrator | Tuesday 13 May 2025 20:13:25 +0000 (0:00:00.785) 0:01:12.605 *********** 2025-05-13 20:15:23.853575 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.853587 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853600 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853613 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853626 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853638 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853646 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.853654 | orchestrator | 2025-05-13 20:15:23.853661 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-13 20:15:23.853669 | orchestrator | Tuesday 13 May 2025 20:13:25 +0000 (0:00:00.717) 0:01:13.323 *********** 2025-05-13 20:15:23.853677 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.853685 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853693 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853701 | orchestrator | skipping: 
[testbed-node-5] 2025-05-13 20:15:23.853709 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:15:23.853717 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.853724 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.853732 | orchestrator | 2025-05-13 20:15:23.853740 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-13 20:15:23.853749 | orchestrator | Tuesday 13 May 2025 20:13:28 +0000 (0:00:02.561) 0:01:15.885 *********** 2025-05-13 20:15:23.853757 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853765 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.853773 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853781 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853789 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853796 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853804 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853812 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853820 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853840 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.853849 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853856 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.853864 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-13 20:15:23.853872 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.853879 | orchestrator | 2025-05-13 20:15:23.853887 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-13 20:15:23.853894 | orchestrator | Tuesday 13 May 2025 20:13:31 +0000 (0:00:03.250) 0:01:19.135 *********** 2025-05-13 20:15:23.853909 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.853917 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.853925 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.853933 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.853941 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.853979 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.853991 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.854004 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.854230 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.854249 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.854257 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-13 20:15:23.854265 | orchestrator | skipping: 
[testbed-node-4] 2025-05-13 20:15:23.854273 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-13 20:15:23.854281 | orchestrator | 2025-05-13 20:15:23.854289 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-13 20:15:23.854297 | orchestrator | Tuesday 13 May 2025 20:13:34 +0000 (0:00:02.239) 0:01:21.375 *********** 2025-05-13 20:15:23.854305 | orchestrator | [WARNING]: Skipped 2025-05-13 20:15:23.854313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-13 20:15:23.854321 | orchestrator | due to this access issue: 2025-05-13 20:15:23.854329 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-13 20:15:23.854337 | orchestrator | not a directory 2025-05-13 20:15:23.854345 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-13 20:15:23.854353 | orchestrator | 2025-05-13 20:15:23.854361 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-13 20:15:23.854369 | orchestrator | Tuesday 13 May 2025 20:13:35 +0000 (0:00:01.304) 0:01:22.679 *********** 2025-05-13 20:15:23.854377 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.854384 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.854392 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.854400 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.854408 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.854416 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.854423 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.854431 | orchestrator | 2025-05-13 20:15:23.854439 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-13 20:15:23.854446 | orchestrator | Tuesday 13 May 2025 20:13:36 +0000 (0:00:01.274) 0:01:23.954 *********** 2025-05-13 20:15:23.854454 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.854462 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:15:23.854470 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:15:23.854478 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:15:23.854485 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:15:23.854493 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:15:23.854501 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:15:23.854508 | orchestrator | 2025-05-13 20:15:23.854516 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-13 20:15:23.854524 | orchestrator | Tuesday 13 May 2025 20:13:37 +0000 (0:00:00.918) 0:01:24.872 *********** 2025-05-13 20:15:23.854533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854579 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-13 20:15:23.854589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-13 20:15:23.854691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-13 20:15:23.854699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-13 20:15:23.854788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-13 20:15:23.854831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854840 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-13 20:15:23.854857 | orchestrator | 2025-05-13 20:15:23.854866 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-13 20:15:23.854880 | orchestrator | Tuesday 13 May 2025 20:13:42 +0000 (0:00:04.660) 0:01:29.533 *********** 2025-05-13 20:15:23.854892 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-13 20:15:23.854907 | orchestrator | skipping: [testbed-manager] 2025-05-13 20:15:23.854920 | orchestrator | 2025-05-13 20:15:23.854934 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855017 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:01.138) 0:01:30.671 *********** 2025-05-13 20:15:23.855033 | orchestrator | 2025-05-13 20:15:23.855047 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855061 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.067) 0:01:30.739 *********** 2025-05-13 20:15:23.855074 | orchestrator | 2025-05-13 20:15:23.855087 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855100 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.065) 0:01:30.805 *********** 2025-05-13 20:15:23.855114 | orchestrator | 2025-05-13 20:15:23.855128 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855141 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.244) 0:01:31.049 *********** 2025-05-13 20:15:23.855155 | orchestrator | 2025-05-13 20:15:23.855167 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855182 | orchestrator | 
Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.066) 0:01:31.115 *********** 2025-05-13 20:15:23.855196 | orchestrator | 2025-05-13 20:15:23.855209 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855223 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.064) 0:01:31.180 *********** 2025-05-13 20:15:23.855234 | orchestrator | 2025-05-13 20:15:23.855243 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-13 20:15:23.855251 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.071) 0:01:31.252 *********** 2025-05-13 20:15:23.855258 | orchestrator | 2025-05-13 20:15:23.855266 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-13 20:15:23.855274 | orchestrator | Tuesday 13 May 2025 20:13:44 +0000 (0:00:00.089) 0:01:31.342 *********** 2025-05-13 20:15:23.855281 | orchestrator | changed: [testbed-manager] 2025-05-13 20:15:23.855289 | orchestrator | 2025-05-13 20:15:23.855297 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-13 20:15:23.855313 | orchestrator | Tuesday 13 May 2025 20:14:00 +0000 (0:00:16.276) 0:01:47.618 *********** 2025-05-13 20:15:23.855328 | orchestrator | changed: [testbed-manager] 2025-05-13 20:15:23.855336 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.855344 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:15:23.855352 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:15:23.855359 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:15:23.855367 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:15:23.855375 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.855382 | orchestrator | 2025-05-13 20:15:23.855389 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-13 20:15:23.855396 | orchestrator | Tuesday 13 May 2025 20:14:16 +0000 (0:00:16.554) 0:02:04.173 *********** 2025-05-13 20:15:23.855402 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:15:23.855408 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.855415 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.855422 | orchestrator | 2025-05-13 20:15:23.855428 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-13 20:15:23.855435 | orchestrator | Tuesday 13 May 2025 20:14:23 +0000 (0:00:06.958) 0:02:11.131 *********** 2025-05-13 20:15:23.855442 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.855448 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.855455 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:15:23.855461 | orchestrator | 2025-05-13 20:15:23.855468 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-13 20:15:23.855485 | orchestrator | Tuesday 13 May 2025 20:14:35 +0000 (0:00:11.757) 0:02:22.889 *********** 2025-05-13 20:15:23.855492 | orchestrator | changed: [testbed-manager] 2025-05-13 20:15:23.855498 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:15:23.855505 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:15:23.855511 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:15:23.855518 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.855524 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.855530 | orchestrator | changed: [testbed-node-0] 
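[editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines further down are the deploy wrapper polling asynchronous task states until each task finishes. A minimal sketch of that polling pattern, assuming a Celery-style AsyncResult API and a hypothetical wait_for_tasks helper (the actual OSISM tooling may differ):

    import time
    from celery.result import AsyncResult

    def wait_for_tasks(task_ids, interval=1):
        """Poll each task until it leaves the PENDING/STARTED states."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id).state  # e.g. STARTED, SUCCESS
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)        # stop polling finished tasks
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

This matches the behaviour visible in the log: once task 71d69509-908c-4d72-af1b-9d48e71ddc4f reaches SUCCESS, the buffered Ansible output for that task (the glance play) is printed in one burst immediately after the SUCCESS line.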
2025-05-13 20:15:23.855537 | orchestrator | 2025-05-13 20:15:23.855543 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-13 20:15:23.855550 | orchestrator | Tuesday 13 May 2025 20:14:53 +0000 (0:00:18.115) 0:02:41.005 *********** 2025-05-13 20:15:23.855556 | orchestrator | changed: [testbed-manager] 2025-05-13 20:15:23.855563 | orchestrator | 2025-05-13 20:15:23.855569 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-13 20:15:23.855576 | orchestrator | Tuesday 13 May 2025 20:15:01 +0000 (0:00:07.962) 0:02:48.967 *********** 2025-05-13 20:15:23.855583 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:15:23.855589 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:15:23.855596 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:15:23.855603 | orchestrator | 2025-05-13 20:15:23.855609 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-13 20:15:23.855616 | orchestrator | Tuesday 13 May 2025 20:15:07 +0000 (0:00:05.678) 0:02:54.646 *********** 2025-05-13 20:15:23.855622 | orchestrator | changed: [testbed-manager] 2025-05-13 20:15:23.855629 | orchestrator | 2025-05-13 20:15:23.855635 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-13 20:15:23.855642 | orchestrator | Tuesday 13 May 2025 20:15:12 +0000 (0:00:05.225) 0:02:59.871 *********** 2025-05-13 20:15:23.855648 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:15:23.855655 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:15:23.855661 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:15:23.855668 | orchestrator | 2025-05-13 20:15:23.855674 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:15:23.855681 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-13 20:15:23.855688 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-13 20:15:23.855695 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-13 20:15:23.855701 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-13 20:15:23.855708 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-13 20:15:23.855715 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-13 20:15:23.855721 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-13 20:15:23.855727 | orchestrator | 2025-05-13 20:15:23.855734 | orchestrator | 2025-05-13 20:15:23.855741 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:15:23.855747 | orchestrator | Tuesday 13 May 2025 20:15:22 +0000 (0:00:10.393) 0:03:10.265 *********** 2025-05-13 20:15:23.855754 | orchestrator | =============================================================================== 2025-05-13 20:15:23.855760 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.55s 2025-05-13 20:15:23.855773 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.12s 2025-05-13 20:15:23.855779 | 
orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.55s 2025-05-13 20:15:23.855785 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.28s 2025-05-13 20:15:23.855792 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.97s 2025-05-13 20:15:23.855806 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.76s 2025-05-13 20:15:23.855818 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.39s 2025-05-13 20:15:23.855829 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.96s 2025-05-13 20:15:23.855840 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.96s 2025-05-13 20:15:23.855851 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.05s 2025-05-13 20:15:23.855863 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.01s 2025-05-13 20:15:23.855870 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.68s 2025-05-13 20:15:23.855877 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.23s 2025-05-13 20:15:23.855883 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.66s 2025-05-13 20:15:23.855890 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.58s 2025-05-13 20:15:23.855896 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.25s 2025-05-13 20:15:23.855903 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.18s 2025-05-13 20:15:23.855909 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.56s 2025-05-13 20:15:23.855916 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.24s 2025-05-13 20:15:23.855922 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.08s 2025-05-13 20:15:23.855929 | orchestrator | 2025-05-13 20:15:23 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:23.856088 | orchestrator | 2025-05-13 20:15:23 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:23.856105 | orchestrator | 2025-05-13 20:15:23 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:26.910181 | orchestrator | 2025-05-13 20:15:26 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:26.910299 | orchestrator | 2025-05-13 20:15:26 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:26.910313 | orchestrator | 2025-05-13 20:15:26 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:26.910862 | orchestrator | 2025-05-13 20:15:26 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:26.910893 | orchestrator | 2025-05-13 20:15:26 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:29.959798 | orchestrator | 2025-05-13 20:15:29 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:29.962543 | orchestrator | 2025-05-13 20:15:29 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:29.963613 | 
orchestrator | 2025-05-13 20:15:29 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:29.965207 | orchestrator | 2025-05-13 20:15:29 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:29.965417 | orchestrator | 2025-05-13 20:15:29 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:33.013599 | orchestrator | 2025-05-13 20:15:33 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:33.014901 | orchestrator | 2025-05-13 20:15:33 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:33.016200 | orchestrator | 2025-05-13 20:15:33 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:33.018163 | orchestrator | 2025-05-13 20:15:33 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:33.018185 | orchestrator | 2025-05-13 20:15:33 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:36.066431 | orchestrator | 2025-05-13 20:15:36 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:36.066646 | orchestrator | 2025-05-13 20:15:36 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:36.068799 | orchestrator | 2025-05-13 20:15:36 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:36.072656 | orchestrator | 2025-05-13 20:15:36 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:36.072723 | orchestrator | 2025-05-13 20:15:36 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:39.106526 | orchestrator | 2025-05-13 20:15:39 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:39.106991 | orchestrator | 2025-05-13 20:15:39 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:39.107696 | orchestrator | 2025-05-13 20:15:39 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:39.108907 | orchestrator | 2025-05-13 20:15:39 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:39.109375 | orchestrator | 2025-05-13 20:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:42.166175 | orchestrator | 2025-05-13 20:15:42 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:42.166298 | orchestrator | 2025-05-13 20:15:42 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:42.167157 | orchestrator | 2025-05-13 20:15:42 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:42.168370 | orchestrator | 2025-05-13 20:15:42 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:42.168404 | orchestrator | 2025-05-13 20:15:42 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:45.229818 | orchestrator | 2025-05-13 20:15:45 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:45.232349 | orchestrator | 2025-05-13 20:15:45 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:45.233710 | orchestrator | 2025-05-13 20:15:45 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:45.239472 | orchestrator | 2025-05-13 20:15:45 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:45.239503 | 
orchestrator | 2025-05-13 20:15:45 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:48.272514 | orchestrator | 2025-05-13 20:15:48 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:48.272783 | orchestrator | 2025-05-13 20:15:48 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:48.274151 | orchestrator | 2025-05-13 20:15:48 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:48.274755 | orchestrator | 2025-05-13 20:15:48 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:48.274895 | orchestrator | 2025-05-13 20:15:48 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:51.321865 | orchestrator | 2025-05-13 20:15:51 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:51.324167 | orchestrator | 2025-05-13 20:15:51 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:51.326091 | orchestrator | 2025-05-13 20:15:51 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:51.328123 | orchestrator | 2025-05-13 20:15:51 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:51.328166 | orchestrator | 2025-05-13 20:15:51 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:54.401622 | orchestrator | 2025-05-13 20:15:54 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:54.402880 | orchestrator | 2025-05-13 20:15:54 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:54.404093 | orchestrator | 2025-05-13 20:15:54 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:54.406264 | orchestrator | 2025-05-13 20:15:54 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:54.406717 | orchestrator | 2025-05-13 20:15:54 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:15:57.464839 | orchestrator | 2025-05-13 20:15:57 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:15:57.469262 | orchestrator | 2025-05-13 20:15:57 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:15:57.469366 | orchestrator | 2025-05-13 20:15:57 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:15:57.471321 | orchestrator | 2025-05-13 20:15:57 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:15:57.471492 | orchestrator | 2025-05-13 20:15:57 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:16:00.532150 | orchestrator | 2025-05-13 20:16:00 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:16:00.534191 | orchestrator | 2025-05-13 20:16:00 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:16:00.537445 | orchestrator | 2025-05-13 20:16:00 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:16:00.540848 | orchestrator | 2025-05-13 20:16:00 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:16:00.540901 | orchestrator | 2025-05-13 20:16:00 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:16:03.593417 | orchestrator | 2025-05-13 20:16:03 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:16:03.594842 | orchestrator | 2025-05-13 
20:16:03 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:16:03.597251 | orchestrator | 2025-05-13 20:16:03 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:16:03.599558 | orchestrator | 2025-05-13 20:16:03 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:16:03.599738 | orchestrator | 2025-05-13 20:16:03 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:16:06.648734 | orchestrator | 2025-05-13 20:16:06 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:16:06.650679 | orchestrator | 2025-05-13 20:16:06 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state STARTED 2025-05-13 20:16:06.654468 | orchestrator | 2025-05-13 20:16:06 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED 2025-05-13 20:16:06.656338 | orchestrator | 2025-05-13 20:16:06 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:16:06.656394 | orchestrator | 2025-05-13 20:16:06 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:16:09.707623 | orchestrator | 2025-05-13 20:16:09 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:16:09.712074 | orchestrator | 2025-05-13 20:16:09 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED 2025-05-13 20:16:09.713738 | orchestrator | 2025-05-13 20:16:09 | INFO  | Task 71d69509-908c-4d72-af1b-9d48e71ddc4f is in state SUCCESS 2025-05-13 20:16:09.717466 | orchestrator | 2025-05-13 20:16:09.717520 | orchestrator | 2025-05-13 20:16:09.717533 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:16:09.717681 | orchestrator | 2025-05-13 20:16:09.717698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:16:09.717710 | orchestrator | Tuesday 13 May 2025 20:12:53 +0000 (0:00:00.242) 0:00:00.242 *********** 2025-05-13 20:16:09.717721 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:16:09.717733 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:16:09.717744 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:16:09.717755 | orchestrator | 2025-05-13 20:16:09.717766 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:16:09.717777 | orchestrator | Tuesday 13 May 2025 20:12:53 +0000 (0:00:00.276) 0:00:00.518 *********** 2025-05-13 20:16:09.717788 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-13 20:16:09.717799 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-13 20:16:09.717810 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-13 20:16:09.717820 | orchestrator | 2025-05-13 20:16:09.717831 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-13 20:16:09.717842 | orchestrator | 2025-05-13 20:16:09.717852 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 20:16:09.717863 | orchestrator | Tuesday 13 May 2025 20:12:54 +0000 (0:00:00.359) 0:00:00.878 *********** 2025-05-13 20:16:09.717873 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:16:09.717884 | orchestrator | 2025-05-13 20:16:09.717922 | orchestrator | TASK [service-ks-register : glance | Creating services] 
************************ 2025-05-13 20:16:09.717934 | orchestrator | Tuesday 13 May 2025 20:12:54 +0000 (0:00:00.501) 0:00:01.380 *********** 2025-05-13 20:16:09.717944 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-13 20:16:09.717955 | orchestrator | 2025-05-13 20:16:09.717966 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-13 20:16:09.717976 | orchestrator | Tuesday 13 May 2025 20:13:07 +0000 (0:00:12.301) 0:00:13.682 *********** 2025-05-13 20:16:09.717987 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-13 20:16:09.717998 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-13 20:16:09.718008 | orchestrator | 2025-05-13 20:16:09.718066 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-13 20:16:09.718080 | orchestrator | Tuesday 13 May 2025 20:13:13 +0000 (0:00:06.221) 0:00:19.904 *********** 2025-05-13 20:16:09.718090 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-13 20:16:09.718100 | orchestrator | 2025-05-13 20:16:09.718111 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-13 20:16:09.718122 | orchestrator | Tuesday 13 May 2025 20:13:16 +0000 (0:00:03.101) 0:00:23.005 *********** 2025-05-13 20:16:09.718134 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:16:09.718145 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-13 20:16:09.718183 | orchestrator | 2025-05-13 20:16:09.718195 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-13 20:16:09.718289 | orchestrator | Tuesday 13 May 2025 20:13:20 +0000 (0:00:04.234) 0:00:27.240 *********** 2025-05-13 20:16:09.718302 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:16:09.718428 | orchestrator | 2025-05-13 20:16:09.718443 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-13 20:16:09.718456 | orchestrator | Tuesday 13 May 2025 20:13:24 +0000 (0:00:03.617) 0:00:30.857 *********** 2025-05-13 20:16:09.718470 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-13 20:16:09.718482 | orchestrator | 2025-05-13 20:16:09.718495 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-13 20:16:09.718509 | orchestrator | Tuesday 13 May 2025 20:13:27 +0000 (0:00:03.645) 0:00:34.502 *********** 2025-05-13 20:16:09.718547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.718567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.718595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.718609 | orchestrator | 2025-05-13 20:16:09.718623 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 20:16:09.718635 | orchestrator | Tuesday 13 May 2025 20:13:34 +0000 (0:00:06.306) 0:00:40.809 *********** 2025-05-13 20:16:09.718649 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:16:09.718662 | orchestrator | 2025-05-13 20:16:09.718685 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-13 20:16:09.718697 | orchestrator | Tuesday 13 May 2025 20:13:34 +0000 (0:00:00.743) 0:00:41.553 *********** 2025-05-13 20:16:09.718709 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:16:09.718721 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:16:09.718733 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:16:09.718744 | orchestrator | 2025-05-13 20:16:09.718756 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-13 20:16:09.718767 | orchestrator | Tuesday 13 May 2025 20:13:40 +0000 (0:00:05.902) 0:00:47.455 *********** 2025-05-13 20:16:09.718779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718814 | orchestrator | 2025-05-13 20:16:09.718825 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-13 20:16:09.718836 | orchestrator | Tuesday 13 May 2025 20:13:42 +0000 (0:00:01.776) 0:00:49.232 *********** 2025-05-13 20:16:09.718848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718879 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:16:09.718891 | orchestrator | 2025-05-13 20:16:09.718927 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-13 20:16:09.718938 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:01.154) 0:00:50.386 *********** 2025-05-13 20:16:09.718949 | orchestrator | ok: 
[testbed-node-0] 2025-05-13 20:16:09.718960 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:16:09.718970 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:16:09.718981 | orchestrator | 2025-05-13 20:16:09.718991 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-13 20:16:09.719002 | orchestrator | Tuesday 13 May 2025 20:13:44 +0000 (0:00:00.819) 0:00:51.206 *********** 2025-05-13 20:16:09.719013 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:16:09.719023 | orchestrator | 2025-05-13 20:16:09.719034 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-13 20:16:09.719045 | orchestrator | Tuesday 13 May 2025 20:13:44 +0000 (0:00:00.131) 0:00:51.337 *********** 2025-05-13 20:16:09.719055 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:16:09.719066 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:16:09.719077 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:16:09.719087 | orchestrator | 2025-05-13 20:16:09.719098 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-13 20:16:09.719109 | orchestrator | Tuesday 13 May 2025 20:13:45 +0000 (0:00:00.306) 0:00:51.643 *********** 2025-05-13 20:16:09.719120 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:16:09.719130 | orchestrator | 2025-05-13 20:16:09.719141 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-13 20:16:09.719152 | orchestrator | Tuesday 13 May 2025 20:13:45 +0000 (0:00:00.570) 0:00:52.214 *********** 2025-05-13 20:16:09.719171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.719185 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.719206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.719219 | orchestrator | 2025-05-13 20:16:09.719230 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS 
certificate] *** 2025-05-13 20:16:09.719240 | orchestrator | Tuesday 13 May 2025 20:13:50 +0000 (0:00:04.815) 0:00:57.030 *********** 2025-05-13 20:16:09.719261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:16:09.719282 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:16:09.719295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:16:09.719307 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:16:09.719328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:16:09.719348 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:16:09.719359 | orchestrator | 2025-05-13 20:16:09.719370 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-13 20:16:09.719380 | orchestrator | Tuesday 13 May 2025 20:13:53 +0000 (0:00:03.183) 0:01:00.214 *********** 2025-05-13 20:16:09.719392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:16:09.719404 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:16:09.719423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-13 20:16:09.719442 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:16:09.719455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.719467 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.719477 | orchestrator |
2025-05-13 20:16:09.719488 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-05-13 20:16:09.719499 | orchestrator | Tuesday 13 May 2025 20:13:59 +0000 (0:00:05.643) 0:01:05.857 ***********
2025-05-13 20:16:09.719510 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.719521 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.719532 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.719542 | orchestrator |
2025-05-13 20:16:09.719553 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-05-13 20:16:09.719564 | orchestrator | Tuesday 13 May 2025 20:14:08 +0000 (0:00:09.589) 0:01:15.446 ***********
2025-05-13 20:16:09.719581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.719600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.719613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.719625 | orchestrator |
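
The config.json files distributed in the task above are kolla's per-container bootstrap manifests: when a container starts, kolla's entrypoint copies each listed source file into place and fixes ownership before launching the service, and the '/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro' volume in the container definitions above is what makes those files visible inside the container. As a rough illustration only (the real file is rendered from kolla-ansible templates; the command, paths and ownership below are assumptions, not values taken from this log), such a manifest has this shape, written here as a Python dict to match the log's own formatting:

    # Hypothetical sketch of a kolla config.json payload -- not copied from this job.
    glance_api_config_json = {
        "command": "glance-api",  # assumed service command
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/glance-api.conf",
                "dest": "/etc/glance/glance-api.conf",
                "owner": "glance",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/log/kolla/glance", "owner": "glance:glance", "recurse": True},
        ],
    }
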
2025-05-13 20:16:09.719636 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-13 20:16:09.719654 | orchestrator | Tuesday 13 May 2025 20:14:13 +0000 (0:00:05.012) 0:01:20.458 ***********
2025-05-13 20:16:09.719673 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:16:09.719691 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:16:09.719710 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.719728 | orchestrator |
2025-05-13 20:16:09.719746 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-13 20:16:09.719764 | orchestrator | Tuesday 13 May 2025 20:14:20 +0000 (0:00:07.071) 0:01:27.529 ***********
2025-05-13 20:16:09.719783 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.719802 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.719821 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.719839 | orchestrator |
2025-05-13 20:16:09.719857 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-13 20:16:09.719885 | orchestrator | Tuesday 13 May 2025 20:14:26 +0000 (0:00:05.322) 0:01:32.852 ***********
2025-05-13 20:16:09.719940 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.719953 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.719964 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.719975 | orchestrator |
2025-05-13 20:16:09.719986 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-13 20:16:09.719996 | orchestrator | Tuesday 13 May 2025 20:14:30 +0000 (0:00:03.982) 0:01:36.834 ***********
2025-05-13 20:16:09.720007 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.720018 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.720028 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.720039 | orchestrator |
2025-05-13 20:16:09.720049 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-13 20:16:09.720060 | orchestrator | Tuesday 13 May 2025 20:14:34 +0000 (0:00:03.900) 0:01:40.735 ***********
2025-05-13 20:16:09.720070 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.720081 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.720091 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.720102 | orchestrator |
2025-05-13 20:16:09.720112 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-13 20:16:09.720123 | orchestrator | Tuesday 13 May 2025 20:14:44 +0000 (0:00:10.207) 0:01:50.942 ***********
2025-05-13 20:16:09.720133 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.720144 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.720154 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.720165 | orchestrator |
2025-05-13 20:16:09.720175 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-13 20:16:09.720186 | orchestrator | Tuesday 13 May 2025 20:14:45 +0000 (0:00:00.813) 0:01:51.755 ***********
2025-05-13 20:16:09.720197 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-13 20:16:09.720208 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.720218 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-13 20:16:09.720229 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.720240 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-13 20:16:09.720250 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.720261 | orchestrator |
2025-05-13 20:16:09.720272 | orchestrator | TASK [glance : 
Check glance containers] **************************************** 2025-05-13 20:16:09.720282 | orchestrator | Tuesday 13 May 2025 20:14:49 +0000 (0:00:04.564) 0:01:56.320 *********** 2025-05-13 20:16:09.720295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-13 20:16:09.720328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.720342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-13 20:16:09.720361 | orchestrator |
2025-05-13 20:16:09.720372 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-13 20:16:09.720383 | orchestrator | Tuesday 13 May 2025 20:14:54 +0000 (0:00:04.386) 0:02:00.707 ***********
2025-05-13 20:16:09.720393 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:16:09.720404 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:16:09.720415 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:16:09.720425 | orchestrator |
2025-05-13 20:16:09.720436 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-05-13 20:16:09.720447 | orchestrator | Tuesday 13 May 2025 20:14:54 +0000 (0:00:00.342) 0:02:01.049 ***********
2025-05-13 20:16:09.720457 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720468 | orchestrator |
2025-05-13 20:16:09.720478 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-13 20:16:09.720489 | orchestrator | Tuesday 13 May 2025 20:14:56 +0000 (0:00:02.155) 0:02:03.204 ***********
2025-05-13 20:16:09.720500 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720510 | orchestrator |
2025-05-13 20:16:09.720521 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-13 20:16:09.720532 | orchestrator | Tuesday 13 May 2025 20:14:58 +0000 (0:00:02.058) 0:02:05.263 ***********
2025-05-13 20:16:09.720542 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720553 | orchestrator |
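
The Enable/Disable pair around the bootstrap run below exists because glance's schema migration creates stored functions, and MariaDB/MySQL with binary logging enabled rejects those from non-SUPER users unless log_bin_trust_function_creators is set; the playbook switches the variable back off once the bootstrap container has finished. A minimal sketch of the equivalent manual toggle (client library, host and credentials are placeholders, not taken from this log):

    import pymysql  # illustrative client; the playbook uses Ansible MySQL modules

    conn = pymysql.connect(host="mariadb-vip.example", user="root", password="secret")
    with conn.cursor() as cur:
        # Allow function/trigger creation while the glance db-sync runs ...
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        # ... run the schema migration (the bootstrap container below) ...
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
    conn.close()
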
2025-05-13 20:16:09.720563 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-13 20:16:09.720574 | orchestrator | Tuesday 13 May 2025 20:15:00 +0000 (0:00:02.171) 0:02:07.434 ***********
2025-05-13 20:16:09.720585 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720595 | orchestrator |
2025-05-13 20:16:09.720606 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-13 20:16:09.720616 | orchestrator | Tuesday 13 May 2025 20:15:29 +0000 (0:00:28.418) 0:02:35.853 ***********
2025-05-13 20:16:09.720627 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720638 | orchestrator |
2025-05-13 20:16:09.720655 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-13 20:16:09.720666 | orchestrator | Tuesday 13 May 2025 20:15:31 +0000 (0:00:02.567) 0:02:38.421 ***********
2025-05-13 20:16:09.720676 | orchestrator |
2025-05-13 20:16:09.720687 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-13 20:16:09.720698 | orchestrator | Tuesday 13 May 2025 20:15:31 +0000 (0:00:00.065) 0:02:38.486 ***********
2025-05-13 20:16:09.720708 | orchestrator |
2025-05-13 20:16:09.720719 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-13 20:16:09.720730 | orchestrator | Tuesday 13 May 2025 20:15:31 +0000 (0:00:00.065) 0:02:38.552 ***********
2025-05-13 20:16:09.720741 | orchestrator |
2025-05-13 20:16:09.720752 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-13 20:16:09.720764 | orchestrator | Tuesday 13 May 2025 20:15:31 +0000 (0:00:00.065) 0:02:38.617 ***********
2025-05-13 20:16:09.720783 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:16:09.720801 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:16:09.720819 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:16:09.720836 | orchestrator |
2025-05-13 20:16:09.720853 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:16:09.720873 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-13 20:16:09.720928 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 20:16:09.720947 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 20:16:09.720965 | orchestrator |
2025-05-13 20:16:09.720984 | orchestrator |
2025-05-13 20:16:09.721002 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:16:09.721020 | orchestrator | Tuesday 13 May 2025 20:16:05 +0000 (0:00:33.857) 0:03:12.475 ***********
2025-05-13 20:16:09.721040 | orchestrator | ===============================================================================
2025-05-13 20:16:09.721059 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.86s
2025-05-13 20:16:09.721078 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.42s
2025-05-13 20:16:09.721098 | orchestrator | service-ks-register : glance | Creating services ----------------------- 12.30s
2025-05-13 20:16:09.721119 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 10.21s
2025-05-13 20:16:09.721140 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 9.59s
2025-05-13 20:16:09.721158 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.07s
2025-05-13 20:16:09.721178 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.31s
2025-05-13 20:16:09.721198 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.22s
2025-05-13 20:16:09.721218 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.90s
2025-05-13 20:16:09.721238 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.64s
2025-05-13 20:16:09.721258 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.32s
2025-05-13 20:16:09.721279 | orchestrator | glance : Copying over config.json files for services -------------------- 5.01s
2025-05-13 20:16:09.721297 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.82s
2025-05-13 20:16:09.721316 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.56s
2025-05-13 20:16:09.721333 | orchestrator | glance : Check glance containers ---------------------------------------- 4.39s
2025-05-13 20:16:09.721354 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.23s
2025-05-13 20:16:09.721374 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.98s
2025-05-13 20:16:09.721394 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.90s
2025-05-13 20:16:09.721412 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.65s
2025-05-13 20:16:09.721430 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.62s
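
The PLAY RECAP and TASKS RECAP blocks above are plain text with a fixed shape, which makes them easy to post-process when comparing runs. A small self-contained helper (not part of the job) that turns recap lines into dicts:

    import re

    RECAP = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

    def parse_recap(line):
        # "testbed-node-0 : ok=26 changed=19 ..." -> {"host": ..., "ok": 26, ...}
        m = RECAP.match(line.strip())
        if not m:
            return None
        stats = dict(kv.split("=") for kv in m.group("stats").split())
        return {"host": m.group("host"), **{k: int(v) for k, v in stats.items()}}

    print(parse_recap("testbed-node-0 : ok=26 changed=19 unreachable=0 "
                      "failed=0 skipped=12 rescued=0 ignored=0"))
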
2025-05-13 20:16:09.721448 | orchestrator | 2025-05-13 20:16:09 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED
2025-05-13 20:16:09.722718 | orchestrator | 2025-05-13 20:16:09 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:16:09.722877 | orchestrator | 2025-05-13 20:16:09 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:16:12.795608 | orchestrator | 2025-05-13 20:16:12 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:16:12.795947 | orchestrator | 2025-05-13 20:16:12 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:16:12.797095 | orchestrator | 2025-05-13 20:16:12 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state STARTED
2025-05-13 20:16:12.797877 | orchestrator | 2025-05-13 20:16:12 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:16:12.798122 | orchestrator | 2025-05-13 20:16:12 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: the same four task-state checks repeated every ~3 seconds from 2025-05-13 20:16:15 to 2025-05-13 20:17:28, all tasks remaining in state STARTED ...]
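
The "Task <uuid> is in state STARTED" lines show the orchestrator waiting on asynchronous deployment tasks and re-checking their state on a short interval; the states shown (STARTED, SUCCESS) match Celery's task-state names, so a wait loop of this kind can be sketched as follows (illustrative only, assuming a configured Celery app with a result backend; this is not the actual osism implementation):

    import time
    from celery.result import AsyncResult

    def wait_for_tasks(task_ids, interval=1.0):
        # Poll each task until it leaves the PENDING/STARTED states, mirroring
        # the "Wait 1 second(s) until the next check" cadence in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id).state  # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE", "REVOKED"):
                    pending.discard(task_id)
            if pending:
                time.sleep(interval)
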
2025-05-13 20:17:31.983959 | orchestrator | 2025-05-13 20:17:31 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:31.984241 | orchestrator | 2025-05-13 20:17:31 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:31.985787 | orchestrator | 2025-05-13 20:17:31 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:31.987233 | orchestrator | 2025-05-13 20:17:31 | INFO  | Task 23c582fb-0003-4878-bf5f-0962d0222b3c is in state SUCCESS
2025-05-13 20:17:31.989262 | orchestrator |
2025-05-13 20:17:31.989328 | orchestrator |
2025-05-13 20:17:31.989349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:17:31.989368 | orchestrator |
2025-05-13 20:17:31.990515 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:17:31.990579 | orchestrator | Tuesday 13 May 2025 20:13:24 +0000 (0:00:00.630) 0:00:00.631 ***********
2025-05-13 20:17:31.990592 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:17:31.990604 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:17:31.990615 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:17:31.990625 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:17:31.990635 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:17:31.990647 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:17:31.990664 | orchestrator |
2025-05-13 20:17:31.990682 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:17:31.990694 | orchestrator | Tuesday 13 May 2025 20:13:24 +0000 (0:00:00.709) 0:00:01.340 ***********
2025-05-13 20:17:31.990711 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-05-13 20:17:31.990730 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-05-13 20:17:31.990747 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-05-13 20:17:31.990766 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-05-13 20:17:31.990783 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-05-13 20:17:31.990800 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-05-13 20:17:31.990819 | orchestrator |
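
The play that follows registers cinder in Keystone before deploying its containers: a service entry (cinderv3, type volumev3), internal and public endpoints, the service user, and role grants. A rough openstacksdk equivalent of those service-ks-register tasks (illustrative; the job does this through Ansible modules, the cloud name and region below are assumptions, and the URLs are copied from the log):

    import openstack

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    service = conn.identity.create_service(name="cinderv3", type="volumev3")
    for interface, url in {
        "internal": "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s",
        "public": "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s",
    }.items():
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumed region
        )
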
2025-05-13 20:17:31.990838 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-05-13 20:17:31.990919 | orchestrator |
2025-05-13 20:17:31.990943 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-13 20:17:31.990962 | orchestrator | Tuesday 13 May 2025 20:13:25 +0000 (0:00:00.637) 0:00:01.978 ***********
2025-05-13 20:17:31.990981 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:17:31.991000 | orchestrator |
2025-05-13 20:17:31.991018 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-05-13 20:17:31.991032 | orchestrator | Tuesday 13 May 2025 20:13:27 +0000 (0:00:01.515) 0:00:03.494 ***********
2025-05-13 20:17:31.991052 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-05-13 20:17:31.991069 | orchestrator |
2025-05-13 20:17:31.991087 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-05-13 20:17:31.991102 | orchestrator | Tuesday 13 May 2025 20:13:30 +0000 (0:00:03.096) 0:00:06.590 ***********
2025-05-13 20:17:31.991120 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-05-13 20:17:31.991140 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-05-13 20:17:31.991158 | orchestrator |
2025-05-13 20:17:31.991176 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-05-13 20:17:31.991196 | orchestrator | Tuesday 13 May 2025 20:13:36 +0000 (0:00:06.117) 0:00:12.707 ***********
2025-05-13 20:17:31.991214 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 20:17:31.991233 | orchestrator |
2025-05-13 20:17:31.991258 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-05-13 20:17:31.991277 | orchestrator | Tuesday 13 May 2025 20:13:39 +0000 (0:00:03.076) 0:00:15.783 ***********
2025-05-13 20:17:31.991294 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 20:17:31.991312 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-05-13 20:17:31.991331 | orchestrator |
2025-05-13 20:17:31.991349 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-05-13 20:17:31.991364 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:03.814) 0:00:19.598 ***********
2025-05-13 20:17:31.991375 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-13 20:17:31.991386 | orchestrator |
2025-05-13 20:17:31.991397 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-05-13 20:17:31.991407 | orchestrator | Tuesday 13 May 2025 20:13:46 +0000 (0:00:03.144) 0:00:22.743 ***********
2025-05-13 20:17:31.991418 | orchestrator | changed: 
[testbed-node-0] => (item=cinder -> service -> admin) 2025-05-13 20:17:31.991429 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-13 20:17:31.991440 | orchestrator | 2025-05-13 20:17:31.991451 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-13 20:17:31.991461 | orchestrator | Tuesday 13 May 2025 20:13:53 +0000 (0:00:07.658) 0:00:30.402 *********** 2025-05-13 20:17:31.991476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.991578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.991594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.991607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.991619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.991634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.991703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.991738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.991756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.991769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.991780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.991792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.991811 | orchestrator |
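Each (item=...) record above is one entry of the cinder role's service map, the data structure that drives nearly every loop in this play. A minimal sketch of that filter in plain Python follows; it keeps only a subset of the keys dumped above, and the helper name and group set are illustrative, not role code:

```python
# Minimal sketch (not kolla-ansible source): how a service map like the one
# dumped in the log drives the per-node task loops.
cinder_services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "group": "cinder-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-api:2024.2",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"]},
    },
    "cinder-volume": {
        "container_name": "cinder_volume",
        "group": "cinder-volume",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"]},
    },
}


def services_for_host(services, host_groups):
    """Yield the services that are enabled and whose 'group' matches one of the
    host's inventory groups -- which is why testbed-node-0..2 only report
    api/scheduler items while testbed-node-3..5 handle volume/backup items."""
    for name, svc in services.items():
        if svc.get("enabled") and svc["group"] in host_groups:
            yield name, svc


# A control node sits in the cinder-api and cinder-scheduler groups only:
for name, svc in services_for_host(cinder_services, {"cinder-api", "cinder-scheduler"}):
    print(f"/etc/kolla/{name} -> container {svc['container_name']}")
```

The same group filter explains the changed/skipping split visible in every loop of this task list.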
2025-05-13 20:17:31.991883 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-13 20:17:31.991899 | orchestrator | Tuesday 13 May 2025 20:13:57 +0000 (0:00:07.658) 0:00:34.144 ***********
2025-05-13 20:17:31.991910 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:17:31.991921 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:17:31.991931 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:17:31.991943 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:17:31.991954 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:17:31.991964 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:17:31.991975 | orchestrator |
2025-05-13 20:17:31.991986 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-13 20:17:31.991997 | orchestrator | Tuesday 13 May 2025 20:13:58 +0000 (0:00:00.907) 0:00:35.051 ***********
2025-05-13 20:17:31.992007 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:17:31.992018 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:17:31.992028 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:17:31.992039 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:17:31.992050 | orchestrator |
2025-05-13 20:17:31.992061 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-05-13 20:17:31.992072 | orchestrator | Tuesday 13 May 2025 20:14:00 +0000 (0:00:01.554) 0:00:36.606 ***********
2025-05-13 20:17:31.992082 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-05-13 20:17:31.992093 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-05-13 20:17:31.992103 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-05-13 20:17:31.992114 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-05-13 20:17:31.992125 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-05-13 20:17:31.992135 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-05-13 20:17:31.992146 | orchestrator |
2025-05-13 20:17:31.992156 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-05-13 20:17:31.992167 | orchestrator | Tuesday 13 May 2025 20:14:04 +0000 (0:00:04.365) 0:00:40.971 ***********
2025-05-13 20:17:31.992180 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-13 20:17:31.992193 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-13 20:17:31.992217 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value':
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 20:17:31.992262 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 20:17:31.992281 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 20:17:31.992302 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-13 20:17:31.992322 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992356 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992421 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992436 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992448 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992460 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-13 20:17:31.992480 | orchestrator | 2025-05-13 20:17:31.992491 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-13 20:17:31.992502 | orchestrator | Tuesday 13 May 2025 20:14:10 +0000 (0:00:05.796) 0:00:46.768 *********** 2025-05-13 20:17:31.992513 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:17:31.992525 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:17:31.992536 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-13 20:17:31.992547 | orchestrator | 2025-05-13 20:17:31.992558 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-13 20:17:31.992568 | orchestrator | Tuesday 13 May 2025 20:14:12 +0000 (0:00:02.442) 0:00:49.210 *********** 2025-05-13 20:17:31.992579 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-13 20:17:31.992590 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-13 20:17:31.992600 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-13 20:17:31.992611 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 20:17:31.992622 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 20:17:31.992662 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-13 20:17:31.992674 | orchestrator | 2025-05-13 20:17:31.992685 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-13 20:17:31.992696 | orchestrator | Tuesday 13 May 2025 20:14:15 +0000 (0:00:02.991) 0:00:52.202 *********** 2025-05-13 20:17:31.992708 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-13 20:17:31.992719 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-13 20:17:31.992730 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-13 20:17:31.992741 | orchestrator | 
ok: [testbed-node-3] => (item=cinder-backup) 2025-05-13 20:17:31.992752 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-13 20:17:31.992762 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-13 20:17:31.992773 | orchestrator | 2025-05-13 20:17:31.992784 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-13 20:17:31.992795 | orchestrator | Tuesday 13 May 2025 20:14:16 +0000 (0:00:01.098) 0:00:53.300 *********** 2025-05-13 20:17:31.992805 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.992816 | orchestrator | 2025-05-13 20:17:31.992827 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-13 20:17:31.992838 | orchestrator | Tuesday 13 May 2025 20:14:17 +0000 (0:00:00.297) 0:00:53.598 *********** 2025-05-13 20:17:31.992849 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.992910 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.992922 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:17:31.992932 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.992943 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:17:31.992953 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.992964 | orchestrator | 2025-05-13 20:17:31.992975 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-13 20:17:31.992986 | orchestrator | Tuesday 13 May 2025 20:14:19 +0000 (0:00:01.852) 0:00:55.451 *********** 2025-05-13 20:17:31.992997 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:17:31.993020 | orchestrator | 2025-05-13 20:17:31.993030 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-13 20:17:31.993041 | orchestrator | Tuesday 13 May 2025 20:14:20 +0000 (0:00:01.735) 0:00:57.186 *********** 2025-05-13 20:17:31.993053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.993065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.993111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.993137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.993287 | orchestrator | 2025-05-13 20:17:31.993305 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-13 20:17:31.993324 | orchestrator | Tuesday 13 May 2025 20:14:24 +0000 (0:00:03.459) 0:01:00.646 *********** 2025-05-13 20:17:31.993373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993469 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.993480 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.993490 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:17:31.993502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993535 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:17:31.993554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993576 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.993588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993611 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.993622 | orchestrator | 2025-05-13 20:17:31.993632 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-13 20:17:31.993643 | orchestrator | Tuesday 13 May 2025 20:14:26 +0000 (0:00:02.087) 0:01:02.733 *********** 2025-05-13 20:17:31.993662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993715 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.993732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.993775 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.993787 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.993798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.993810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.993821 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:17:31.993832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.993844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.993884 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:17:31.993904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.993930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.993941 | orchestrator | skipping: [testbed-node-5]
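The backend TLS certificate and key tasks above skip every item, consistent with the tls_backend: 'no' flags in the service definitions. The task that follows renders a config.json per service; the '/etc/kolla/<service>/:/var/lib/kolla/config_files/:ro' bind mount seen in each volume list is where those files land, and at container start the kolla entrypoint copies every listed file into place. A rough sketch of the file's shape follows; the command, paths, and permissions are illustrative values, not the rendered output:

```python
import json

# Illustrative approximation of a kolla config.json for cinder-volume. The
# real file is produced from the role's templates; this only shows the shape
# the entrypoint consumes (copy each entry from 'source' to 'dest').
config_json = {
    "command": "cinder-volume --config-file /etc/cinder/cinder.conf",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/cinder.conf",
            "dest": "/etc/cinder/cinder.conf",
            "owner": "cinder",
            "perm": "0600",
        },
        {
            # Corresponds to the per-service ceph.conf copied a few tasks earlier.
            "source": "/var/lib/kolla/config_files/ceph.conf",
            "dest": "/etc/ceph/ceph.conf",
            "owner": "cinder",
            "perm": "0600",
        },
    ],
}

print(json.dumps(config_json, indent=2))
```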
2025-05-13 20:17:31.993957 | orchestrator |
2025-05-13 20:17:31.993975 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-13 20:17:31.993986 | orchestrator | Tuesday 13 May 2025 20:14:28 +0000 (0:00:01.964) 0:01:04.697 ***********
2025-05-13 20:17:31.993998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 20:17:31.994010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-13 20:17:31.994061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-13 20:17:31.994091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.994103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994208 | orchestrator | 2025-05-13 20:17:31.994218 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-13 20:17:31.994229 | orchestrator | Tuesday 13 May 2025 20:14:31 +0000 (0:00:02.906) 0:01:07.604 *********** 2025-05-13 20:17:31.994240 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-13 20:17:31.994251 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.994262 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-13 20:17:31.994276 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.994295 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-13 20:17:31.994313 | orchestrator | 
skipping: [testbed-node-4] 2025-05-13 20:17:31.994331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-13 20:17:31.994350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-13 20:17:31.994369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-13 20:17:31.994386 | orchestrator | 2025-05-13 20:17:31.994406 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-13 20:17:31.994424 | orchestrator | Tuesday 13 May 2025 20:14:33 +0000 (0:00:02.228) 0:01:09.832 *********** 2025-05-13 20:17:31.994452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.994473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.994485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.994498 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.994638 | orchestrator | 2025-05-13 20:17:31.994648 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-13 20:17:31.994659 | orchestrator | Tuesday 13 May 2025 20:14:47 +0000 (0:00:14.184) 0:01:24.017 *********** 2025-05-13 20:17:31.994677 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.994696 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.994720 | orchestrator | 
skipping: [testbed-node-2] 2025-05-13 20:17:31.994744 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:17:31.994763 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:17:31.994781 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:17:31.994798 | orchestrator | 2025-05-13 20:17:31.994815 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-13 20:17:31.994834 | orchestrator | Tuesday 13 May 2025 20:14:50 +0000 (0:00:02.501) 0:01:26.519 *********** 2025-05-13 20:17:31.995035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.995135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.995173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995185 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.995196 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.995250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-13 20:17:31.995263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995286 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:17:31.995308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995339 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.995350 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995373 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:17:31.995392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-13 20:17:31.995415 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.995426 | orchestrator | 2025-05-13 20:17:31.995437 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-13 20:17:31.995448 | orchestrator | 
Tuesday 13 May 2025 20:14:51 +0000 (0:00:01.539) 0:01:28.058 *********** 2025-05-13 20:17:31.995459 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:17:31.995492 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.995500 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:17:31.995508 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.995516 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:17:31.995524 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.995531 | orchestrator | 2025-05-13 20:17:31.995539 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-13 20:17:31.995547 | orchestrator | Tuesday 13 May 2025 20:14:52 +0000 (0:00:01.213) 0:01:29.272 *********** 2025-05-13 20:17:31.995555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.995564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 20:17:31.995581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-13 
20:17:31.995589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-13 20:17:31.995681 | orchestrator | 2025-05-13 20:17:31.995689 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-13 20:17:31.995697 | orchestrator | Tuesday 13 May 2025 20:14:55 +0000 (0:00:02.288) 0:01:31.560 *********** 2025-05-13 20:17:31.995705 | orchestrator | skipping: [testbed-node-0] 2025-05-13 
20:17:31.995713 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:17:31.995720 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:17:31.995728 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:17:31.995736 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:17:31.995744 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:17:31.995752 | orchestrator | 2025-05-13 20:17:31.995760 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-13 20:17:31.995768 | orchestrator | Tuesday 13 May 2025 20:14:55 +0000 (0:00:00.819) 0:01:32.379 *********** 2025-05-13 20:17:31.995775 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:17:31.995783 | orchestrator | 2025-05-13 20:17:31.995791 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-13 20:17:31.995799 | orchestrator | Tuesday 13 May 2025 20:14:57 +0000 (0:00:02.028) 0:01:34.408 *********** 2025-05-13 20:17:31.995807 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:17:31.995815 | orchestrator | 2025-05-13 20:17:31.995822 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-13 20:17:31.995830 | orchestrator | Tuesday 13 May 2025 20:15:00 +0000 (0:00:02.112) 0:01:36.520 *********** 2025-05-13 20:17:31.995838 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:17:31.995846 | orchestrator | 2025-05-13 20:17:31.995891 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.995900 | orchestrator | Tuesday 13 May 2025 20:15:19 +0000 (0:00:19.623) 0:01:56.144 *********** 2025-05-13 20:17:31.995908 | orchestrator | 2025-05-13 20:17:31.995921 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.995930 | orchestrator | Tuesday 13 May 2025 20:15:19 +0000 (0:00:00.068) 0:01:56.212 *********** 2025-05-13 20:17:31.995937 | orchestrator | 2025-05-13 20:17:31.995945 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.995953 | orchestrator | Tuesday 13 May 2025 20:15:19 +0000 (0:00:00.073) 0:01:56.285 *********** 2025-05-13 20:17:31.995966 | orchestrator | 2025-05-13 20:17:31.995974 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.995982 | orchestrator | Tuesday 13 May 2025 20:15:19 +0000 (0:00:00.066) 0:01:56.352 *********** 2025-05-13 20:17:31.995989 | orchestrator | 2025-05-13 20:17:31.995997 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.996005 | orchestrator | Tuesday 13 May 2025 20:15:19 +0000 (0:00:00.070) 0:01:56.422 *********** 2025-05-13 20:17:31.996013 | orchestrator | 2025-05-13 20:17:31.996021 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-13 20:17:31.996028 | orchestrator | Tuesday 13 May 2025 20:15:20 +0000 (0:00:00.065) 0:01:56.487 *********** 2025-05-13 20:17:31.996036 | orchestrator | 2025-05-13 20:17:31.996044 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-13 20:17:31.996051 | orchestrator | Tuesday 13 May 2025 20:15:20 +0000 (0:00:00.070) 0:01:56.558 *********** 2025-05-13 20:17:31.996059 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:17:31.996067 | orchestrator | changed: [testbed-node-1] 
2025-05-13 20:17:31.996074 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:17:31.996082 | orchestrator |
2025-05-13 20:17:31.996090 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-05-13 20:17:31.996097 | orchestrator | Tuesday 13 May 2025 20:15:45 +0000 (0:00:25.406) 0:02:21.964 ***********
2025-05-13 20:17:31.996105 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:17:31.996113 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:17:31.996120 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:17:31.996128 | orchestrator |
2025-05-13 20:17:31.996136 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-05-13 20:17:31.996143 | orchestrator | Tuesday 13 May 2025 20:15:56 +0000 (0:00:11.118) 0:02:33.083 ***********
2025-05-13 20:17:31.996151 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:17:31.996159 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:17:31.996166 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:17:31.996174 | orchestrator |
2025-05-13 20:17:31.996182 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-05-13 20:17:31.996189 | orchestrator | Tuesday 13 May 2025 20:17:19 +0000 (0:01:22.955) 0:03:56.038 ***********
2025-05-13 20:17:31.996197 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:17:31.996205 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:17:31.996212 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:17:31.996220 | orchestrator |
2025-05-13 20:17:31.996228 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-05-13 20:17:31.996236 | orchestrator | Tuesday 13 May 2025 20:17:28 +0000 (0:00:09.084) 0:04:05.123 ***********
2025-05-13 20:17:31.996244 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:17:31.996251 | orchestrator |
2025-05-13 20:17:31.996259 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:17:31.996267 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-13 20:17:31.996276 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-13 20:17:31.996284 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-13 20:17:31.996292 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 20:17:31.996299 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 20:17:31.996307 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-13 20:17:31.996320 | orchestrator |
2025-05-13 20:17:31.996328 | orchestrator |
2025-05-13 20:17:31.996336 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:17:31.996343 | orchestrator | Tuesday 13 May 2025 20:17:29 +0000 (0:00:00.987) 0:04:06.110 ***********
2025-05-13 20:17:31.996351 | orchestrator | ===============================================================================
2025-05-13 20:17:31.996359 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 82.96s
2025-05-13 20:17:31.996366 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.41s
2025-05-13 20:17:31.996374 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.62s
2025-05-13 20:17:31.996382 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.18s
2025-05-13 20:17:31.996389 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.12s
2025-05-13 20:17:31.996397 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.08s
2025-05-13 20:17:31.996408 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.66s
2025-05-13 20:17:31.996417 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.12s
2025-05-13 20:17:31.996429 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.80s
2025-05-13 20:17:31.996436 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.37s
2025-05-13 20:17:31.996444 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.81s
2025-05-13 20:17:31.996452 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.74s
2025-05-13 20:17:31.996460 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.46s
2025-05-13 20:17:31.996467 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.15s
2025-05-13 20:17:31.996475 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.10s
2025-05-13 20:17:31.996483 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.08s
2025-05-13 20:17:31.996491 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.99s
2025-05-13 20:17:31.996498 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.91s
2025-05-13 20:17:31.996506 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.50s
2025-05-13 20:17:31.996514 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.44s
2025-05-13 20:17:31.996521 | orchestrator | 2025-05-13 20:17:31 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:31.996530 | orchestrator | 2025-05-13 20:17:31 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:35.028211 | orchestrator | 2025-05-13 20:17:35 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:35.029499 | orchestrator | 2025-05-13 20:17:35 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:35.031525 | orchestrator | 2025-05-13 20:17:35 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:35.032771 | orchestrator | 2025-05-13 20:17:35 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:35.034260 | orchestrator | 2025-05-13 20:17:35 | INFO  | Wait 1 second(s) until the next check
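Throughout the cinder play recapped above, every service entry carries a healthcheck dict (interval, retries, start_period, test, timeout). cinder-api is probed with kolla's healthcheck_curl against its bound API address, while cinder-scheduler, cinder-volume and cinder-backup use healthcheck_port against 5672, which appears to verify that the named process holds a connection to RabbitMQ. A minimal sketch, assuming the Docker Engine API's convention of durations in nanoseconds, of how such an entry maps onto a Docker healthcheck; to_docker_healthcheck is illustrative, not kolla-ansible code:

    # Sketch only: converts a kolla-style healthcheck dict, as printed in the
    # task output above, into the shape of a Docker Engine API Healthcheck.
    NANOS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(hc: dict) -> dict:
        return {
            "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port cinder-volume 5672']
            "Interval": int(hc["interval"]) * NANOS_PER_SECOND,      # '30' -> 30 s
            "Timeout": int(hc["timeout"]) * NANOS_PER_SECOND,
            "StartPeriod": int(hc["start_period"]) * NANOS_PER_SECOND,
            "Retries": int(hc["retries"]),
        }

    # Values copied verbatim from the cinder-volume definition above:
    print(to_docker_healthcheck({
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
        "timeout": "30",
    }))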
2025-05-13 20:17:38.068656 | orchestrator | 2025-05-13 20:17:38 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:38.068904 | orchestrator | 2025-05-13 20:17:38 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:38.069790 | orchestrator | 2025-05-13 20:17:38 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:38.070714 | orchestrator | 2025-05-13 20:17:38 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:38.070739 | orchestrator | 2025-05-13 20:17:38 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:41.108641 | orchestrator | 2025-05-13 20:17:41 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:41.108811 | orchestrator | 2025-05-13 20:17:41 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:41.109920 | orchestrator | 2025-05-13 20:17:41 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:41.110612 | orchestrator | 2025-05-13 20:17:41 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:41.110689 | orchestrator | 2025-05-13 20:17:41 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:44.156597 | orchestrator | 2025-05-13 20:17:44 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:44.159760 | orchestrator | 2025-05-13 20:17:44 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:44.162401 | orchestrator | 2025-05-13 20:17:44 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:44.163991 | orchestrator | 2025-05-13 20:17:44 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:44.164098 | orchestrator | 2025-05-13 20:17:44 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:47.205862 | orchestrator | 2025-05-13 20:17:47 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:47.206052 | orchestrator | 2025-05-13 20:17:47 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:47.206986 | orchestrator | 2025-05-13 20:17:47 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:47.208166 | orchestrator | 2025-05-13 20:17:47 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:47.208382 | orchestrator | 2025-05-13 20:17:47 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:50.251400 | orchestrator | 2025-05-13 20:17:50 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:50.251477 | orchestrator | 2025-05-13 20:17:50 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:50.252819 | orchestrator | 2025-05-13 20:17:50 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:50.254628 | orchestrator | 2025-05-13 20:17:50 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:50.255167 | orchestrator | 2025-05-13 20:17:50 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:53.298822 | orchestrator | 2025-05-13 20:17:53 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:53.299721 | orchestrator | 2025-05-13 20:17:53 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:53.300701 | orchestrator | 2025-05-13 20:17:53 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:53.302862 | orchestrator | 2025-05-13 20:17:53 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:53.302913 | orchestrator | 2025-05-13 20:17:53 | INFO  | Wait 1 second(s) until the next check
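These repeating INFO lines come from the OSISM deploy wrapper on the orchestrator: each UUID is an asynchronous task whose state is polled until it leaves STARTED, with a one-second wait announced between rounds (the rounds land roughly three seconds apart in the log, presumably because the status queries themselves take time). A minimal sketch of that observable loop; get_task_state is a hypothetical stand-in for whatever the osism client actually calls, and only the poll-and-wait behaviour is taken from the log:

    import time

    def get_task_state(task_id: str) -> str:
        # Hypothetical helper; in reality the osism tooling queries its
        # task backend for the state of the given task ID.
        raise NotImplementedError

    def wait_for_tasks(task_ids: list[str]) -> None:
        pending = list(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"INFO | Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.remove(task_id)
            if pending:
                print("INFO | Wait 1 second(s) until the next check")
                time.sleep(1)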
2025-05-13 20:17:56.340965 | orchestrator | 2025-05-13 20:17:56 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:56.342270 | orchestrator | 2025-05-13 20:17:56 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:56.343127 | orchestrator | 2025-05-13 20:17:56 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:56.344128 | orchestrator | 2025-05-13 20:17:56 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:56.344268 | orchestrator | 2025-05-13 20:17:56 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:17:59.392032 | orchestrator | 2025-05-13 20:17:59 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:17:59.393123 | orchestrator | 2025-05-13 20:17:59 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:17:59.396326 | orchestrator | 2025-05-13 20:17:59 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:17:59.396994 | orchestrator | 2025-05-13 20:17:59 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:17:59.397024 | orchestrator | 2025-05-13 20:17:59 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:02.452353 | orchestrator | 2025-05-13 20:18:02 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:18:02.452442 | orchestrator | 2025-05-13 20:18:02 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:18:02.452804 | orchestrator | 2025-05-13 20:18:02 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:02.453669 | orchestrator | 2025-05-13 20:18:02 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:18:02.453699 | orchestrator | 2025-05-13 20:18:02 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:05.511040 | orchestrator | 2025-05-13 20:18:05 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:18:05.512305 | orchestrator | 2025-05-13 20:18:05 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:18:05.514501 | orchestrator | 2025-05-13 20:18:05 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:05.516746 | orchestrator | 2025-05-13 20:18:05 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:18:05.516796 | orchestrator | 2025-05-13 20:18:05 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:08.570127 | orchestrator | 2025-05-13 20:18:08 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:18:08.571297 | orchestrator | 2025-05-13 20:18:08 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state STARTED
2025-05-13 20:18:08.572500 | orchestrator | 2025-05-13 20:18:08 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:08.573484 | orchestrator | 2025-05-13 20:18:08 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:18:08.574181 | orchestrator | 2025-05-13 20:18:08 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:11.630070 | orchestrator | 2025-05-13 20:18:11 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
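Immediately below, task 9cb487cf-d120-49b1-9b33-776a7248f27e reaches SUCCESS and its buffered output, the barbican deployment play, is printed. The service-ks-register steps in that play are ordinary Keystone admin operations; a minimal sketch of the equivalent calls with openstacksdk, assuming admin credentials are available via OS_CLOUD/OS_* environment variables, with the service type and endpoint URLs copied from the log (kolla-ansible itself performs this through its own Ansible modules, not this code):

    import openstack

    # Assumes OS_CLOUD / OS_* admin credentials are configured.
    conn = openstack.connect()

    # barbican (key-manager) service plus internal and public endpoints,
    # as created by the 'Creating services' / 'Creating endpoints' tasks below.
    service = conn.identity.create_service(name="barbican", type="key-manager")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9311"),
        ("public", "https://api.testbed.osism.xyz:9311"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id, interface=interface, url=url,
        )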
2025-05-13 20:18:11.632046 | orchestrator | 2025-05-13 20:18:11 | INFO  | Task 9cb487cf-d120-49b1-9b33-776a7248f27e is in state SUCCESS
2025-05-13 20:18:11.633401 | orchestrator |
2025-05-13 20:18:11.633442 | orchestrator |
2025-05-13 20:18:11.633455 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:18:11.633467 | orchestrator |
2025-05-13 20:18:11.633478 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:18:11.633490 | orchestrator | Tuesday 13 May 2025 20:16:12 +0000 (0:00:00.461) 0:00:00.461 ***********
2025-05-13 20:18:11.633526 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:18:11.633539 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:18:11.633549 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:18:11.633560 | orchestrator |
2025-05-13 20:18:11.633952 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:18:11.633983 | orchestrator | Tuesday 13 May 2025 20:16:13 +0000 (0:00:00.342) 0:00:00.804 ***********
2025-05-13 20:18:11.634001 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-05-13 20:18:11.634061 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-05-13 20:18:11.634076 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-05-13 20:18:11.634087 | orchestrator |
2025-05-13 20:18:11.634098 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-05-13 20:18:11.634109 | orchestrator |
2025-05-13 20:18:11.634120 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-13 20:18:11.634131 | orchestrator | Tuesday 13 May 2025 20:16:13 +0000 (0:00:00.484) 0:00:01.288 ***********
2025-05-13 20:18:11.634143 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:18:11.634154 | orchestrator |
2025-05-13 20:18:11.634165 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-05-13 20:18:11.634176 | orchestrator | Tuesday 13 May 2025 20:16:14 +0000 (0:00:01.155) 0:00:02.444 ***********
2025-05-13 20:18:11.634188 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-05-13 20:18:11.634199 | orchestrator |
2025-05-13 20:18:11.634210 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-05-13 20:18:11.634220 | orchestrator | Tuesday 13 May 2025 20:16:18 +0000 (0:00:03.729) 0:00:06.174 ***********
2025-05-13 20:18:11.634252 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-05-13 20:18:11.634265 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-05-13 20:18:11.634276 | orchestrator |
2025-05-13 20:18:11.634286 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-05-13 20:18:11.634297 | orchestrator | Tuesday 13 May 2025 20:16:24 +0000 (0:00:06.106) 0:00:12.281 ***********
2025-05-13 20:18:11.634308 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 20:18:11.634319 | orchestrator |
2025-05-13 20:18:11.634330 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-13 20:18:11.634341 | orchestrator | Tuesday 13 May 2025 20:16:28 +0000 (0:00:03.357) 0:00:15.638 *********** 2025-05-13 
20:18:11.634352 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:18:11.634363 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-13 20:18:11.634378 | orchestrator | 2025-05-13 20:18:11.634401 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-13 20:18:11.634426 | orchestrator | Tuesday 13 May 2025 20:16:31 +0000 (0:00:03.915) 0:00:19.554 *********** 2025-05-13 20:18:11.634442 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:18:11.634459 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-13 20:18:11.634476 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-13 20:18:11.634493 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-13 20:18:11.634510 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-13 20:18:11.634526 | orchestrator | 2025-05-13 20:18:11.634544 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-13 20:18:11.634560 | orchestrator | Tuesday 13 May 2025 20:16:47 +0000 (0:00:15.351) 0:00:34.905 *********** 2025-05-13 20:18:11.634578 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-13 20:18:11.634594 | orchestrator | 2025-05-13 20:18:11.634612 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-13 20:18:11.634650 | orchestrator | Tuesday 13 May 2025 20:16:51 +0000 (0:00:04.029) 0:00:38.935 *********** 2025-05-13 20:18:11.634688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.634732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 
20:18:11.634753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.634773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.634969 | orchestrator | 2025-05-13 20:18:11.634987 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-13 20:18:11.635005 | orchestrator | Tuesday 13 May 2025 20:16:53 +0000 (0:00:02.449) 0:00:41.385 *********** 2025-05-13 20:18:11.635024 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-13 20:18:11.635043 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-13 20:18:11.635054 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-13 20:18:11.635065 | orchestrator | 2025-05-13 20:18:11.635076 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-13 20:18:11.635086 | orchestrator | Tuesday 13 May 2025 20:16:54 +0000 (0:00:00.913) 0:00:42.298 *********** 2025-05-13 20:18:11.635097 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.635108 | orchestrator | 2025-05-13 20:18:11.635118 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-13 20:18:11.635130 | orchestrator | Tuesday 13 May 2025 20:16:54 +0000 (0:00:00.211) 0:00:42.510 *********** 2025-05-13 20:18:11.635140 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.635151 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:18:11.635161 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:18:11.635183 | orchestrator | 2025-05-13 20:18:11.635194 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-13 20:18:11.635204 | orchestrator | Tuesday 13 May 2025 20:16:56 +0000 (0:00:01.172) 0:00:43.683 *********** 2025-05-13 20:18:11.635215 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:18:11.635226 | orchestrator | 2025-05-13 20:18:11.635236 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-13 20:18:11.635247 | orchestrator | Tuesday 13 May 2025 20:16:56 +0000 (0:00:00.550) 0:00:44.233 *********** 2025-05-13 20:18:11.635258 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.635281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.635335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.635348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635508 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.635586 | orchestrator | 2025-05-13 20:18:11.635597 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-13 20:18:11.635609 | orchestrator | Tuesday 13 May 2025 20:17:00 +0000 (0:00:03.859) 0:00:48.092 *********** 2025-05-13 20:18:11.635620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.635641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635665 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:18:11.635689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.635702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635724 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:18:11.635743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.635754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635777 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.635788 | orchestrator | 2025-05-13 20:18:11.635799 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS 
key] **** 2025-05-13 20:18:11.635810 | orchestrator | Tuesday 13 May 2025 20:17:02 +0000 (0:00:01.880) 0:00:49.973 *********** 2025-05-13 20:18:11.635834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.635846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635876 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.635887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.635929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.635958 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:18:11.635988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.636028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636072 | orchestrator 
| skipping: [testbed-node-2] 2025-05-13 20:18:11.636090 | orchestrator | 2025-05-13 20:18:11.636109 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-13 20:18:11.636127 | orchestrator | Tuesday 13 May 2025 20:17:03 +0000 (0:00:01.110) 0:00:51.083 *********** 2025-05-13 20:18:11.636145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636355 | orchestrator | 2025-05-13 20:18:11.636367 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-13 20:18:11.636379 | orchestrator | Tuesday 13 May 2025 20:17:07 +0000 (0:00:04.320) 0:00:55.404 *********** 2025-05-13 20:18:11.636391 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:18:11.636403 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:18:11.636416 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:18:11.636428 | orchestrator | 2025-05-13 20:18:11.636440 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-13 20:18:11.636451 | orchestrator | Tuesday 13 May 2025 20:17:11 +0000 (0:00:03.650) 0:00:59.054 *********** 2025-05-13 20:18:11.636463 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:18:11.636475 | orchestrator | 2025-05-13 20:18:11.636488 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-13 20:18:11.636500 | orchestrator | Tuesday 13 May 2025 20:17:13 +0000 (0:00:02.107) 0:01:01.162 *********** 2025-05-13 20:18:11.636511 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.636522 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:18:11.636533 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:18:11.636544 | orchestrator | 2025-05-13 20:18:11.636554 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-13 20:18:11.636565 | orchestrator | Tuesday 13 May 2025 20:17:14 +0000 (0:00:01.064) 0:01:02.226 *********** 2025-05-13 20:18:11.636576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.636632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.636715 | orchestrator | 2025-05-13 20:18:11.636726 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-13 20:18:11.636737 | orchestrator | Tuesday 13 May 2025 20:17:24 +0000 (0:00:09.585) 0:01:11.811 *********** 2025-05-13 20:18:11.636758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.636778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636802 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:18:11.636813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.636825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.636868 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:18:11.636879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-13 20:18:11.636891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.637110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:18:11.637124 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:18:11.637135 | orchestrator | 2025-05-13 20:18:11.637146 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-13 20:18:11.637157 | orchestrator | Tuesday 13 May 2025 20:17:25 +0000 (0:00:01.543) 0:01:13.355 *********** 2025-05-13 20:18:11.637169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.637207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.637218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-13 20:18:11.637228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.637239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.637249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-13 20:18:11.637270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:18:11.637289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:18:11.637299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:18:11.637309 | orchestrator |
2025-05-13 20:18:11.637318 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-13 20:18:11.637328 | orchestrator | Tuesday 13 May 2025 20:17:29 +0000 (0:00:03.300) 0:01:16.655 ***********
2025-05-13 20:18:11.637338 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:18:11.637347 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:18:11.637356 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:18:11.637366 | orchestrator |
2025-05-13 20:18:11.637375 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-05-13 20:18:11.637385 | orchestrator | Tuesday 13 May 2025 20:17:29 +0000 (0:00:00.623) 0:01:17.279 ***********
2025-05-13 20:18:11.637394 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637403 | orchestrator |
2025-05-13 20:18:11.637413 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-05-13 20:18:11.637422 | orchestrator | Tuesday 13 May 2025 20:17:31 +0000 (0:00:02.349) 0:01:19.628 ***********
2025-05-13 20:18:11.637432 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637441 | orchestrator |
2025-05-13 20:18:11.637451 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-13 20:18:11.637460 | orchestrator | Tuesday 13 May 2025 20:17:34 +0000 (0:00:02.291) 0:01:21.920 ***********
2025-05-13 20:18:11.637469 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637479 | orchestrator |
2025-05-13 20:18:11.637488 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 20:18:11.637498 | orchestrator | Tuesday 13 May 2025 20:17:46 +0000 (0:00:11.723) 0:01:33.643 ***********
2025-05-13 20:18:11.637508 | orchestrator |
2025-05-13 20:18:11.637517 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 20:18:11.637526 | orchestrator | Tuesday 13 May 2025 20:17:46 +0000 (0:00:00.072) 0:01:33.722 ***********
2025-05-13 20:18:11.637536 | orchestrator |
2025-05-13 20:18:11.637545 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-13 20:18:11.637562 | orchestrator | Tuesday 13 May 2025 20:17:46 +0000 (0:00:00.078) 0:01:33.795 ***********
2025-05-13 20:18:11.637572 | orchestrator |
2025-05-13 20:18:11.637581 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-13 20:18:11.637590 | orchestrator | Tuesday 13 May 2025 20:17:46 +0000 (0:00:00.078) 0:01:33.873 ***********
2025-05-13 20:18:11.637600 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637609 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:18:11.637619 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:18:11.637628 | orchestrator |
2025-05-13 20:18:11.637637 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-13 20:18:11.637647 | orchestrator | Tuesday 13 May 2025 20:17:59 +0000 (0:00:12.973) 0:01:46.847 ***********
2025-05-13 20:18:11.637656 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637666 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:18:11.637675 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:18:11.637686 | orchestrator |
2025-05-13 20:18:11.637702 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-13 20:18:11.637718 | orchestrator | Tuesday 13 May 2025 20:18:03 +0000 (0:00:04.426) 0:01:51.274 ***********
2025-05-13 20:18:11.637735 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:18:11.637751 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:18:11.637766 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:18:11.637781 | orchestrator |
2025-05-13 20:18:11.637795 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:18:11.637811 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 20:18:11.637828 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:18:11.637843 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:18:11.637858 | orchestrator |
2025-05-13 20:18:11.637874 | orchestrator |
2025-05-13 20:18:11.637889 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:18:11.637939 | orchestrator | Tuesday 13 May 2025 20:18:08 +0000 (0:00:05.119) 0:01:56.393 ***********
2025-05-13 20:18:11.637955 | orchestrator | ===============================================================================
2025-05-13 20:18:11.637969 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.35s
2025-05-13 20:18:11.637993 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.97s
2025-05-13 20:18:11.638008 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.72s
2025-05-13 20:18:11.638087 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.59s
2025-05-13 20:18:11.638103 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.11s
2025-05-13 20:18:11.638118 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.12s
2025-05-13 20:18:11.638134 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.43s
2025-05-13 20:18:11.638149 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.32s
2025-05-13 20:18:11.638164 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.03s
2025-05-13 20:18:11.638179 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.91s
2025-05-13 20:18:11.638193 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.86s
2025-05-13 20:18:11.638208 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.73s
2025-05-13 20:18:11.638337 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.65s
2025-05-13 20:18:11.638361 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.36s
2025-05-13 20:18:11.638507 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.30s
2025-05-13 20:18:11.638531 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.45s
2025-05-13 20:18:11.638547 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.35s
2025-05-13 20:18:11.638564 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s
2025-05-13 20:18:11.638581 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.11s
2025-05-13 20:18:11.638597 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.88s
2025-05-13 20:18:11.638615 | orchestrator | 2025-05-13 20:18:11 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:11.638627 | orchestrator | 2025-05-13 20:18:11 | INFO  | Task 2c553af8-e69c-4ea2-91b4-c7ebe4b0e67f is in state STARTED
2025-05-13 20:18:11.638646 | orchestrator | 2025-05-13 20:18:11 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:18:11.638656 | orchestrator | 2025-05-13 20:18:11 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:14.681789 | orchestrator | 2025-05-13 20:18:14 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:18:14.682407 | orchestrator | 2025-05-13 20:18:14 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:14.683439 | orchestrator | 2025-05-13 20:18:14 | INFO  | Task 2c553af8-e69c-4ea2-91b4-c7ebe4b0e67f is in state STARTED
2025-05-13 20:18:14.684997 | orchestrator | 2025-05-13 20:18:14 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:18:14.685027 | orchestrator | 2025-05-13 20:18:14 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:18:17.732650 | orchestrator | 2025-05-13 20:18:17 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:18:17.733333 | orchestrator | 2025-05-13 20:18:17 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:18:17.734232 | orchestrator | 2025-05-13 20:18:17 | INFO  | Task 2c553af8-e69c-4ea2-91b4-c7ebe4b0e67f is in state STARTED
2025-05-13 20:18:17.736698 | orchestrator | 2025-05-13 20:18:17 |
INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:18:17.736771 | orchestrator | 2025-05-13 20:18:17 | INFO  | Wait 1 second(s) until the next check
[... repeated polling rounds (roughly every three seconds, 20:18:20 through 20:20:07) elided: tasks e53e30de-4249-485e-827d-e510014f9680, 40580e27-9220-4c51-99b6-ac6c75c77f79 and 1af575ed-3bb4-479e-b463-a95e1113f9ac remain STARTED throughout; task 2c553af8-e69c-4ea2-91b4-c7ebe4b0e67f reaches SUCCESS at 20:18:57; task 58b8cfe9-15b4-4d12-83ef-4d7e0e97599b first appears at 20:18:57 and task 841ea264-1647-44fd-aa22-cabc51a59943 at 20:20:01, both STARTED ...]
2025-05-13 20:20:10.568288 | orchestrator | 2025-05-13 20:20:10 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:20:10.568966 | orchestrator | 2025-05-13 20:20:10 | INFO  | Task 841ea264-1647-44fd-aa22-cabc51a59943 is in state STARTED 2025-05-13 20:20:10.571594 | orchestrator | 2025-05-13 20:20:10 | INFO  | Task 58b8cfe9-15b4-4d12-83ef-4d7e0e97599b is in state STARTED 2025-05-13 20:20:10.571645 | orchestrator | 2025-05-13 20:20:10 | INFO  | Task
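The INFO lines above and below are the OSISM client waiting on the Celery tasks that wrap each kolla-ansible play: it re-reads every task's state, prints it, and sleeps before the next round until no task is left in STARTED. A stripped-down sketch of that wait loop; get_task_state and the scripted states here are stand-ins for the real result-backend lookup, not the actual client code:

import itertools
import time

# Scripted stand-in for the Celery result-backend lookup the client performs.
_STATES = {
    "e53e30de": itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS")),
    "2c553af8": itertools.chain(["STARTED"], itertools.repeat("SUCCESS")),
}

def get_task_state(task_id: str) -> str:
    return next(_STATES[task_id])

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll every task until none is left in a non-final state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["e53e30de", "2c553af8"])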
40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED 2025-05-13 20:20:10.572157 | orchestrator | 2025-05-13 20:20:10 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED 2025-05-13 20:20:10.572402 | orchestrator | 2025-05-13 20:20:10 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:20:13.623802 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:20:13.625868 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task 841ea264-1647-44fd-aa22-cabc51a59943 is in state STARTED 2025-05-13 20:20:13.629727 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task 58b8cfe9-15b4-4d12-83ef-4d7e0e97599b is in state SUCCESS 2025-05-13 20:20:13.632449 | orchestrator | 2025-05-13 20:20:13.632505 | orchestrator | 2025-05-13 20:20:13.632516 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-13 20:20:13.632528 | orchestrator | 2025-05-13 20:20:13.632538 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-13 20:20:13.632548 | orchestrator | Tuesday 13 May 2025 20:18:13 +0000 (0:00:00.103) 0:00:00.103 *********** 2025-05-13 20:20:13.632558 | orchestrator | changed: [localhost] 2025-05-13 20:20:13.632569 | orchestrator | 2025-05-13 20:20:13.632579 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-13 20:20:13.632589 | orchestrator | Tuesday 13 May 2025 20:18:14 +0000 (0:00:01.298) 0:00:01.402 *********** 2025-05-13 20:20:13.632598 | orchestrator | changed: [localhost] 2025-05-13 20:20:13.632608 | orchestrator | 2025-05-13 20:20:13.632617 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-13 20:20:13.632627 | orchestrator | Tuesday 13 May 2025 20:18:49 +0000 (0:00:34.465) 0:00:35.867 *********** 2025-05-13 20:20:13.632636 | orchestrator | changed: [localhost] 2025-05-13 20:20:13.632645 | orchestrator | 2025-05-13 20:20:13.632655 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:20:13.632664 | orchestrator | 2025-05-13 20:20:13.632674 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:20:13.632683 | orchestrator | Tuesday 13 May 2025 20:18:53 +0000 (0:00:04.716) 0:00:40.587 *********** 2025-05-13 20:20:13.632693 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:20:13.632706 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:20:13.632724 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:20:13.632740 | orchestrator | 2025-05-13 20:20:13.632756 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:20:13.632771 | orchestrator | Tuesday 13 May 2025 20:18:54 +0000 (0:00:00.573) 0:00:41.160 *********** 2025-05-13 20:20:13.632790 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-13 20:20:13.632808 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-13 20:20:13.632825 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-13 20:20:13.632864 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-13 20:20:13.632875 | orchestrator | 2025-05-13 20:20:13.632884 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-13 20:20:13.632894 | orchestrator | skipping: no hosts 
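The two download tasks above fetch the ironic-python-agent initramfs and kernel into the directory created first (the initramfs alone accounts for the 34s in the recap below). Functionally each is a plain streaming HTTP fetch; a rough equivalent, with a placeholder URL and destination since the playbook's actual values are not shown in this log:

import pathlib
import urllib.request

def fetch(url: str, dest: pathlib.Path) -> None:
    """Stream url into dest, creating the parent directory first."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):  # 1 MiB chunks
            out.write(chunk)

# Placeholder values -- the playbook's real URL and destination are not in this log.
fetch("https://example.org/ironic-agent.initramfs",
      pathlib.Path("/tmp/ipa/ironic-agent.initramfs"))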
matched 2025-05-13 20:20:13.632904 | orchestrator | 2025-05-13 20:20:13.632914 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:20:13.632924 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:20:13.632937 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:20:13.632950 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:20:13.632959 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:20:13.632969 | orchestrator | 2025-05-13 20:20:13.632979 | orchestrator | 2025-05-13 20:20:13.632989 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:20:13.632998 | orchestrator | Tuesday 13 May 2025 20:18:55 +0000 (0:00:00.839) 0:00:42.000 *********** 2025-05-13 20:20:13.633008 | orchestrator | =============================================================================== 2025-05-13 20:20:13.633017 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 34.47s 2025-05-13 20:20:13.633027 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.72s 2025-05-13 20:20:13.633036 | orchestrator | Ensure the destination directory exists --------------------------------- 1.30s 2025-05-13 20:20:13.633046 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-13 20:20:13.633055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-05-13 20:20:13.633065 | orchestrator | 2025-05-13 20:20:13.633074 | orchestrator | 2025-05-13 20:20:13.633084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:20:13.633094 | orchestrator | 2025-05-13 20:20:13.633103 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:20:13.633121 | orchestrator | Tuesday 13 May 2025 20:19:00 +0000 (0:00:00.356) 0:00:00.356 *********** 2025-05-13 20:20:13.633131 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:20:13.633141 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:20:13.633150 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:20:13.633160 | orchestrator | 2025-05-13 20:20:13.633170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:20:13.633179 | orchestrator | Tuesday 13 May 2025 20:19:00 +0000 (0:00:00.322) 0:00:00.678 *********** 2025-05-13 20:20:13.633189 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-13 20:20:13.633199 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-13 20:20:13.633210 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-13 20:20:13.633227 | orchestrator | 2025-05-13 20:20:13.633243 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-13 20:20:13.633258 | orchestrator | 2025-05-13 20:20:13.633275 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-13 20:20:13.633291 | orchestrator | Tuesday 13 May 2025 20:19:01 +0000 (0:00:00.478) 0:00:01.156 *********** 2025-05-13 20:20:13.633306 | orchestrator | included: 
/ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:20:13.633359 | orchestrator | 2025-05-13 20:20:13.633371 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-13 20:20:13.633381 | orchestrator | Tuesday 13 May 2025 20:19:01 +0000 (0:00:00.520) 0:00:01.677 *********** 2025-05-13 20:20:13.633405 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-13 20:20:13.633425 | orchestrator | 2025-05-13 20:20:13.633435 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-13 20:20:13.633444 | orchestrator | Tuesday 13 May 2025 20:19:05 +0000 (0:00:03.510) 0:00:05.187 *********** 2025-05-13 20:20:13.633454 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-13 20:20:13.633464 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-13 20:20:13.633474 | orchestrator | 2025-05-13 20:20:13.633483 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-13 20:20:13.633492 | orchestrator | Tuesday 13 May 2025 20:19:11 +0000 (0:00:06.267) 0:00:11.455 *********** 2025-05-13 20:20:13.633502 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 20:20:13.633511 | orchestrator | 2025-05-13 20:20:13.633520 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-13 20:20:13.633530 | orchestrator | Tuesday 13 May 2025 20:19:14 +0000 (0:00:03.442) 0:00:14.898 *********** 2025-05-13 20:20:13.633539 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:20:13.633548 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-13 20:20:13.633558 | orchestrator | 2025-05-13 20:20:13.633567 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-13 20:20:13.633577 | orchestrator | Tuesday 13 May 2025 20:19:18 +0000 (0:00:03.882) 0:00:18.780 *********** 2025-05-13 20:20:13.633586 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:20:13.633595 | orchestrator | 2025-05-13 20:20:13.633605 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-13 20:20:13.633614 | orchestrator | Tuesday 13 May 2025 20:19:22 +0000 (0:00:03.832) 0:00:22.613 *********** 2025-05-13 20:20:13.633624 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-13 20:20:13.633633 | orchestrator | 2025-05-13 20:20:13.633642 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-13 20:20:13.633652 | orchestrator | Tuesday 13 May 2025 20:19:26 +0000 (0:00:04.240) 0:00:26.853 *********** 2025-05-13 20:20:13.633661 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.633671 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:13.633680 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:13.633690 | orchestrator | 2025-05-13 20:20:13.633701 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-13 20:20:13.633717 | orchestrator | Tuesday 13 May 2025 20:19:27 +0000 (0:00:00.474) 0:00:27.327 *********** 2025-05-13 20:20:13.633739 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.633770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.633815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.633830 | orchestrator | 2025-05-13 20:20:13.633840 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-13 20:20:13.633850 | orchestrator | Tuesday 13 May 2025 20:19:28 +0000 (0:00:01.414) 0:00:28.742 *********** 2025-05-13 20:20:13.633859 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.633869 | orchestrator | 2025-05-13 20:20:13.633878 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-13 20:20:13.633888 | orchestrator | Tuesday 13 May 2025 20:19:29 +0000 (0:00:00.309) 0:00:29.052 *********** 2025-05-13 20:20:13.633897 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.633907 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:13.633916 | orchestrator | skipping: 
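The service-ks-register tasks above follow the standard Keystone bootstrap sequence for a new service: create the service entity, one endpoint per interface, the service project and user, then grant the admin role. Roughly the same sequence through openstacksdk, as a sketch; the names and endpoint URLs mirror the log output, while the connection setup and password are assumptions:

import openstack

# Assumes a clouds.yaml entry named "testbed"; endpoint URLs copied from the log.
conn = openstack.connect(cloud="testbed")

svc = conn.identity.create_service(name="placement", type="placement")
conn.identity.create_endpoint(service_id=svc.id, interface="internal",
                              url="https://api-int.testbed.osism.xyz:8780")
conn.identity.create_endpoint(service_id=svc.id, interface="public",
                              url="https://api.testbed.osism.xyz:8780")

# Service project, service user, admin role grant -- password is a placeholder.
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="placement", password="...",
                                 default_project_id=project.id)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)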
[testbed-node-2] 2025-05-13 20:20:13.633926 | orchestrator | 2025-05-13 20:20:13.633935 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-13 20:20:13.633944 | orchestrator | Tuesday 13 May 2025 20:19:30 +0000 (0:00:01.130) 0:00:30.182 *********** 2025-05-13 20:20:13.633954 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:20:13.633964 | orchestrator | 2025-05-13 20:20:13.633973 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-13 20:20:13.634153 | orchestrator | Tuesday 13 May 2025 20:19:30 +0000 (0:00:00.618) 0:00:30.801 *********** 2025-05-13 20:20:13.634174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 
20:20:13.634229 | orchestrator | 2025-05-13 20:20:13.634249 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-13 20:20:13.634259 | orchestrator | Tuesday 13 May 2025 20:19:32 +0000 (0:00:01.838) 0:00:32.639 *********** 2025-05-13 20:20:13.634269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634289 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.634304 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:13.634390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634422 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:13.634438 | orchestrator | 2025-05-13 20:20:13.634448 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-13 20:20:13.634463 | orchestrator | Tuesday 13 May 2025 20:19:34 +0000 
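Every changed/skipping item printed in these tasks is one entry of the role's service map: container name, image, bind mounts, a healthcheck (healthcheck_curl against the API port on each node's internal IP) and the HAProxy frontends; the backend TLS tasks are skipped because tls_backend is 'no'. The role iterates this map with dict2items and acts only on enabled services, along these lines (trimmed stand-in data, not the full definition):

# Trimmed stand-in for one entry of the service map dumped in the items above.
services = {
    "placement-api": {
        "container_name": "placement_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/placement-api:2024.2",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"]},
    },
}

for name, svc in services.items():  # the dict2items view the tasks loop over
    if not svc.get("enabled"):
        continue  # disabled services are skipped outright
    print(f"render /etc/kolla/{name}/ and (re)check container {svc['container_name']}")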
(0:00:02.001) 0:00:34.641 *********** 2025-05-13 20:20:13.634474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634484 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.634504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634514 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:13.634525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634535 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:13.634544 | orchestrator | 2025-05-13 20:20:13.634554 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-13 20:20:13.634563 | orchestrator | Tuesday 13 May 2025 20:19:35 +0000 (0:00:00.983) 0:00:35.624 *********** 2025-05-13 20:20:13.634581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634628 | orchestrator | 2025-05-13 20:20:13.634638 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-13 20:20:13.634647 | orchestrator | Tuesday 13 May 2025 20:19:36 +0000 (0:00:01.304) 0:00:36.929 *********** 2025-05-13 20:20:13.634657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.634710 | orchestrator | 2025-05-13 20:20:13.634726 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-13 20:20:13.634744 | orchestrator | Tuesday 13 May 2025 20:19:40 +0000 (0:00:03.603) 0:00:40.532 *********** 2025-05-13 20:20:13.634760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 20:20:13.634778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 20:20:13.634795 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-13 20:20:13.634811 | orchestrator | 2025-05-13 20:20:13.634828 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-13 20:20:13.634853 | orchestrator | Tuesday 13 May 2025 20:19:42 +0000 (0:00:02.412) 0:00:42.944 *********** 2025-05-13 20:20:13.634870 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:20:13.634887 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:13.634903 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:20:13.634913 | orchestrator | 2025-05-13 20:20:13.634922 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-13 20:20:13.634932 | 
orchestrator | Tuesday 13 May 2025 20:19:44 +0000 (0:00:01.425) 0:00:44.369 *********** 2025-05-13 20:20:13.634942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634952 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:13.634970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.634980 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:13.634996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-13 20:20:13.635007 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:13.635016 | orchestrator | 2025-05-13 20:20:13.635026 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-13 20:20:13.635035 | orchestrator | Tuesday 13 May 2025 20:19:45 +0000 (0:00:01.229) 0:00:45.599 *********** 2025-05-13 20:20:13.635053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.635064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.635081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-13 20:20:13.635091 | orchestrator | 2025-05-13 20:20:13.635101 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-13 20:20:13.635111 | orchestrator | Tuesday 13 May 2025 20:19:47 +0000 (0:00:01.525) 0:00:47.125 *********** 2025-05-13 20:20:13.635120 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:13.635132 | orchestrator | 2025-05-13 20:20:13.635148 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-13 20:20:13.635159 | orchestrator | Tuesday 13 May 2025 20:19:49 +0000 (0:00:02.293) 0:00:49.419 *********** 2025-05-13 20:20:13.635168 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:13.635178 | orchestrator | 2025-05-13 20:20:13.635187 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-13 20:20:13.635197 | 
orchestrator | Tuesday 13 May 2025 20:19:51 +0000 (0:00:02.361) 0:00:51.781 ***********
2025-05-13 20:20:13.635206 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:13.635216 | orchestrator |
2025-05-13 20:20:13.635225 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-13 20:20:13.635235 | orchestrator | Tuesday 13 May 2025 20:20:04 +0000 (0:00:13.023) 0:01:04.804 ***********
2025-05-13 20:20:13.635249 | orchestrator |
2025-05-13 20:20:13.635265 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-13 20:20:13.635281 | orchestrator | Tuesday 13 May 2025 20:20:04 +0000 (0:00:00.064) 0:01:04.869 ***********
2025-05-13 20:20:13.635298 | orchestrator |
2025-05-13 20:20:13.635312 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-05-13 20:20:13.635350 | orchestrator | Tuesday 13 May 2025 20:20:04 +0000 (0:00:00.061) 0:01:04.930 ***********
2025-05-13 20:20:13.635365 | orchestrator |
2025-05-13 20:20:13.635382 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-05-13 20:20:13.635407 | orchestrator | Tuesday 13 May 2025 20:20:04 +0000 (0:00:00.067) 0:01:04.997 ***********
2025-05-13 20:20:13.635424 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:13.635441 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:13.635601 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:13.635621 | orchestrator |
2025-05-13 20:20:13.635637 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:20:13.635656 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:20:13.635674 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 20:20:13.635691 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 20:20:13.635704 | orchestrator |
2025-05-13 20:20:13.635721 | orchestrator |
2025-05-13 20:20:13.635737 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:20:13.635754 | orchestrator | Tuesday 13 May 2025 20:20:11 +0000 (0:00:06.673) 0:01:11.671 ***********
2025-05-13 20:20:13.635772 | orchestrator | ===============================================================================
2025-05-13 20:20:13.635814 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.02s
2025-05-13 20:20:13.635826 | orchestrator | placement : Restart placement-api container ----------------------------- 6.67s
2025-05-13 20:20:13.635835 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.27s
2025-05-13 20:20:13.635845 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.25s
2025-05-13 20:20:13.635855 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.88s
2025-05-13 20:20:13.635865 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.83s
2025-05-13 20:20:13.635874 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.60s
2025-05-13 20:20:13.635884 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.51s
2025-05-13 20:20:13.635987 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.44s
2025-05-13 20:20:13.636000 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.41s
2025-05-13 20:20:13.636010 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s
2025-05-13 20:20:13.636020 | orchestrator | placement : Creating placement databases -------------------------------- 2.29s
2025-05-13 20:20:13.636030 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 2.00s
2025-05-13 20:20:13.636040 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.84s
2025-05-13 20:20:13.636050 | orchestrator | placement : Check placement containers ---------------------------------- 1.53s
2025-05-13 20:20:13.636059 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.43s
2025-05-13 20:20:13.636069 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.41s
2025-05-13 20:20:13.636078 | orchestrator | placement : Copying over config.json files for services ----------------- 1.30s
2025-05-13 20:20:13.636088 | orchestrator | placement : Copying over existing policy file --------------------------- 1.23s
2025-05-13 20:20:13.636098 | orchestrator | placement : Set placement policy file ----------------------------------- 1.13s
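
The INFO lines that follow come from the OSISM client on the orchestrator, which polls the state of several deployment tasks running in parallel (the STARTED/SUCCESS strings match Celery task states) and sleeps one second between checks. A minimal Python sketch of such a wait loop; the function name, the FAILURE handling, and the idea that `get_state` wraps the task backend are assumptions for illustration, not the actual osism implementation:

    import time

    def wait_for_tasks(get_state, task_ids, interval=1):
        # Poll every task until it reaches a terminal state. `get_state`
        # is a caller-supplied function mapping a task ID to a state
        # string such as "STARTED" or "SUCCESS" (treating "FAILURE" as
        # terminal is an assumption here).
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

In the log below, five such tasks are watched at a time: 841ea264... and 1af575ed... flip to SUCCESS while the others keep reporting STARTED, and a new task ID (d3bfdaec...) joins the set as further playbooks are queued.
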
2025-05-13 20:20:13.636108 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:20:13.636118 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:13.636128 | orchestrator | 2025-05-13 20:20:13 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:20:13.636143 | orchestrator | 2025-05-13 20:20:13 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:16.692235 | orchestrator | 2025-05-13 20:20:16 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:20:16.694120 | orchestrator | 2025-05-13 20:20:16 | INFO  | Task 841ea264-1647-44fd-aa22-cabc51a59943 is in state STARTED
2025-05-13 20:20:16.696054 | orchestrator | 2025-05-13 20:20:16 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:20:16.697858 | orchestrator | 2025-05-13 20:20:16 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:16.699642 | orchestrator | 2025-05-13 20:20:16 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:20:16.699987 | orchestrator | 2025-05-13 20:20:16 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:19.751913 | orchestrator | 2025-05-13 20:20:19 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:20:19.755575 | orchestrator | 2025-05-13 20:20:19 | INFO  | Task 841ea264-1647-44fd-aa22-cabc51a59943 is in state STARTED
2025-05-13 20:20:19.756136 | orchestrator | 2025-05-13 20:20:19 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:20:19.756844 | orchestrator | 2025-05-13 20:20:19 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:19.757733 | orchestrator | 2025-05-13 20:20:19 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:20:19.757742 | orchestrator | 2025-05-13 20:20:19 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:22.811268 | orchestrator | 2025-05-13 20:20:22 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:20:22.813107 | orchestrator | 2025-05-13 20:20:22 | INFO  | Task 841ea264-1647-44fd-aa22-cabc51a59943 is in state SUCCESS
2025-05-13 20:20:22.815180 | orchestrator | 2025-05-13 20:20:22 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:20:22.816904 | orchestrator | 2025-05-13 20:20:22 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:22.819074 | orchestrator | 2025-05-13 20:20:22 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state STARTED
2025-05-13 20:20:22.819141 | orchestrator | 2025-05-13 20:20:22 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:25.868971 | orchestrator | 2025-05-13 20:20:25 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:20:25.870567 | orchestrator | 2025-05-13 20:20:25 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED
2025-05-13 20:20:25.872638 | orchestrator | 2025-05-13 20:20:25 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state STARTED
2025-05-13 20:20:25.874717 | orchestrator | 2025-05-13 20:20:25 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:25.878132 | orchestrator | 2025-05-13 20:20:25 | INFO  | Task 1af575ed-3bb4-479e-b463-a95e1113f9ac is in state SUCCESS
2025-05-13 20:20:25.880042 | orchestrator |
2025-05-13 20:20:25.880097 | orchestrator | None
2025-05-13 20:20:25.880107 | orchestrator |
2025-05-13 20:20:25.880116 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:20:25.880126 | orchestrator |
2025-05-13 20:20:25.880134 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:20:25.880143 | orchestrator | Tuesday 13 May 2025 20:15:27 +0000 (0:00:00.262) 0:00:00.262 ***********
2025-05-13 20:20:25.880151 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:20:25.880160 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:20:25.880168 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:20:25.880176 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:20:25.880183 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:20:25.880191 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:20:25.880199 | orchestrator |
2025-05-13 20:20:25.880207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:20:25.880251 | orchestrator | Tuesday 13 May 2025 20:15:28 +0000 (0:00:00.685) 0:00:00.947 ***********
2025-05-13 20:20:25.880261 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-13 20:20:25.880270 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-13 20:20:25.880278 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-13 20:20:25.880286 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-13 20:20:25.880293 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-13 20:20:25.880301 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-13 20:20:25.880309 | orchestrator |
2025-05-13 20:20:25.880317 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-13 20:20:25.880325 | orchestrator |
2025-05-13 20:20:25.880333 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-13 20:20:25.880368 | orchestrator | Tuesday 13 May 2025 20:15:28 +0000 (0:00:00.596) 0:00:01.544 ***********
2025-05-13 20:20:25.880402 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:20:25.880412 | orchestrator |
2025-05-13 20:20:25.880420 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-13 20:20:25.880428 | orchestrator | Tuesday 13 May 2025 20:15:30 +0000 (0:00:01.231) 0:00:02.775 ***********
2025-05-13 20:20:25.880436 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:20:25.880444 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:20:25.880451 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:20:25.880459 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:20:25.880467 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:20:25.880474 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:20:25.880482 | orchestrator |
2025-05-13 20:20:25.880490 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-13 20:20:25.880498 | orchestrator | Tuesday 13 May 2025 20:15:31 +0000 (0:00:01.231) 0:00:04.007 ***********
2025-05-13 20:20:25.880505 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:20:25.880513 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:20:25.880521 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:20:25.880529 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:20:25.880536 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:20:25.880544 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:20:25.880552 | orchestrator |
2025-05-13 20:20:25.880559 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-13 20:20:25.880568 | orchestrator | Tuesday 13 May 2025 20:15:32 +0000 (0:00:00.977) 0:00:05.168 ***********
2025-05-13 20:20:25.880575 | orchestrator | ok: [testbed-node-0] => {
2025-05-13 20:20:25.880584 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880592 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880601 | orchestrator | }
2025-05-13 20:20:25.880609 | orchestrator | ok: [testbed-node-1] => {
2025-05-13 20:20:25.880762 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880775 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880783 | orchestrator | }
2025-05-13 20:20:25.880792 | orchestrator | ok: [testbed-node-2] => {
2025-05-13 20:20:25.880801 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880810 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880819 | orchestrator | }
2025-05-13 20:20:25.880828 | orchestrator | ok: [testbed-node-3] => {
2025-05-13 20:20:25.880837 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880846 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880853 | orchestrator | }
2025-05-13 20:20:25.880861 | orchestrator | ok: [testbed-node-4] => {
2025-05-13 20:20:25.880869 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880877 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880884 | orchestrator | }
2025-05-13 20:20:25.880892 | orchestrator | ok: [testbed-node-5] => {
2025-05-13 20:20:25.880900 | orchestrator |     "changed": false,
2025-05-13 20:20:25.880908 | orchestrator |     "msg": "All assertions passed"
2025-05-13 20:20:25.880915 | orchestrator | }
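
The six "All assertions passed" results above are the neutron role's ML2/OVN guard: before any configuration is templated, each host asserts that the deployment is consistent with the OVN mechanism driver. The log does not show which conditions are asserted, so the following Python sketch is an illustrative assumption of such a guard, not the actual kolla-ansible check:

    # Hypothetical re-implementation of an ML2/OVN presence guard; the
    # container names and the plugin variable are assumptions, not the
    # conditions the kolla-ansible neutron role actually asserts.
    def check_ml2_ovn_presence(neutron_plugin_agent, container_facts):
        assert neutron_plugin_agent == "ovn", (
            f"expected ML2/OVN, got {neutron_plugin_agent!r}"
        )
        leftover = [
            name
            for name in ("neutron_openvswitch_agent", "neutron_l3_agent")
            if name in container_facts
        ]
        assert not leftover, f"ML2/OVS containers still present: {leftover}"
        return {"changed": False, "msg": "All assertions passed"}

    # Mirrors the per-node result printed in the log above.
    print(check_ml2_ovn_presence("ovn", {"neutron_server": {"State": "running"}}))
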
2025-05-13 20:20:25.880923 | orchestrator | 2025-05-13 20:20:25.880931 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-13 20:20:25.880939 | orchestrator | Tuesday 13 May 2025 20:15:33 +0000 (0:00:00.977) 0:00:06.146 *********** 2025-05-13 20:20:25.880947 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.880954 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.880962 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.880970 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.880978 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.880985 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.880993 | orchestrator | 2025-05-13 20:20:25.881001 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-13 20:20:25.881009 | orchestrator | Tuesday 13 May 2025 20:15:34 +0000 (0:00:00.770) 0:00:06.916 *********** 2025-05-13 20:20:25.881019 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-13 20:20:25.881042 | orchestrator | 2025-05-13 20:20:25.881055 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-13 20:20:25.881068 | orchestrator | Tuesday 13 May 2025 20:15:37 +0000 (0:00:03.343) 0:00:10.260 *********** 2025-05-13 20:20:25.881081 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-13 20:20:25.881095 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-13 20:20:25.881108 | orchestrator | 2025-05-13 20:20:25.881137 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-13 20:20:25.881149 | orchestrator | Tuesday 13 May 2025 20:15:43 +0000 (0:00:06.350) 0:00:16.610 *********** 2025-05-13 20:20:25.881162 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 20:20:25.881175 | orchestrator | 2025-05-13 20:20:25.881188 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-13 20:20:25.881199 | orchestrator | Tuesday 13 May 2025 20:15:47 +0000 (0:00:03.125) 0:00:19.736 *********** 2025-05-13 20:20:25.881212 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:20:25.881224 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-13 20:20:25.881236 | orchestrator | 2025-05-13 20:20:25.881248 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-13 20:20:25.881259 | orchestrator | Tuesday 13 May 2025 20:15:51 +0000 (0:00:03.946) 0:00:23.683 *********** 2025-05-13 20:20:25.881271 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:20:25.881282 | orchestrator | 2025-05-13 20:20:25.881293 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-13 20:20:25.881305 | orchestrator | Tuesday 13 May 2025 20:15:54 +0000 (0:00:03.505) 0:00:27.188 *********** 2025-05-13 20:20:25.881361 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-13 20:20:25.881377 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-13 20:20:25.881389 | orchestrator | 2025-05-13 20:20:25.881401 | orchestrator | TASK [neutron : include_tasks] ************************************************* 
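
The service-ks-register block above registers neutron in Keystone: a service of type network, internal and public endpoints on port 9696, the neutron user in the service project, and the admin and service role grants. kolla-ansible drives this through Ansible OpenStack modules; the openstacksdk sketch below is an illustrative equivalent, not the playbook's implementation (cloud name, password, and region are placeholders, not values from this deployment):

    import openstack

    # The cloud entry "admin" is a placeholder clouds.yaml profile.
    conn = openstack.connect(cloud="admin")

    # "neutron | Creating services" / "Creating endpoints"
    service = conn.identity.create_service(name="neutron", type="network")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id, interface=interface,
            url=url, region_id="RegionOne",  # region is an assumption
        )

    # "Creating projects" / "Creating users" / "Granting user roles"
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(
        name="neutron", password="CHANGE_ME", default_project_id=project.id
    )
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project, user, role)

The [WARNING] about no_log for update_password in the log comes from the user-creation module not marking that parameter as sensitive; the password value itself is not printed.
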
2025-05-13 20:20:25.881413 | orchestrator | Tuesday 13 May 2025 20:16:03 +0000 (0:00:08.518) 0:00:35.706 *********** 2025-05-13 20:20:25.881437 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.881450 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.881463 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.881475 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.881488 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.881501 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.881513 | orchestrator | 2025-05-13 20:20:25.881527 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-13 20:20:25.881540 | orchestrator | Tuesday 13 May 2025 20:16:04 +0000 (0:00:01.177) 0:00:36.883 *********** 2025-05-13 20:20:25.881553 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.881565 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.881579 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.881587 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.881594 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.881603 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.881610 | orchestrator | 2025-05-13 20:20:25.881618 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-13 20:20:25.881626 | orchestrator | Tuesday 13 May 2025 20:16:08 +0000 (0:00:04.039) 0:00:40.923 *********** 2025-05-13 20:20:25.881634 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:20:25.881642 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:20:25.881650 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:20:25.881658 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:20:25.881665 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:20:25.881673 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:20:25.881681 | orchestrator | 2025-05-13 20:20:25.881699 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-13 20:20:25.881706 | orchestrator | Tuesday 13 May 2025 20:16:10 +0000 (0:00:01.785) 0:00:42.708 *********** 2025-05-13 20:20:25.881715 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.881723 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.881731 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.881738 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.881753 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.881761 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.881769 | orchestrator | 2025-05-13 20:20:25.881777 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-13 20:20:25.881785 | orchestrator | Tuesday 13 May 2025 20:16:13 +0000 (0:00:03.236) 0:00:45.944 *********** 2025-05-13 20:20:25.881797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.881821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.881830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.881839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.881859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.881867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.881875 | orchestrator | 2025-05-13 20:20:25.881884 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-13 20:20:25.881892 | orchestrator | Tuesday 13 May 2025 20:16:16 +0000 (0:00:03.349) 0:00:49.294 *********** 2025-05-13 20:20:25.881900 | orchestrator | [WARNING]: Skipped 2025-05-13 20:20:25.881909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-13 20:20:25.881917 | orchestrator | due to this access issue: 2025-05-13 20:20:25.881926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-13 20:20:25.881934 | orchestrator | a directory 2025-05-13 20:20:25.881942 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:20:25.881950 | orchestrator | 2025-05-13 20:20:25.881958 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 20:20:25.881970 | orchestrator | Tuesday 13 May 2025 20:16:17 +0000 (0:00:00.920) 0:00:50.214 *********** 2025-05-13 20:20:25.881978 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:20:25.881988 | orchestrator | 2025-05-13 20:20:25.881996 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-13 20:20:25.882004 | orchestrator | Tuesday 13 May 2025 20:16:18 +0000 (0:00:01.367) 0:00:51.582 *********** 2025-05-13 20:20:25.882012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.882082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.882107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.882120 | orchestrator | 2025-05-13 20:20:25.882128 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-13 20:20:25.882136 | orchestrator | Tuesday 13 May 2025 20:16:22 +0000 (0:00:03.273) 0:00:54.855 *********** 2025-05-13 20:20:25.882145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882153 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882174 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.882182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882195 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.882204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882217 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.882226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882234 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.882242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882250 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.882258 | orchestrator | 2025-05-13 20:20:25.882266 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-13 20:20:25.882278 | orchestrator | Tuesday 13 May 2025 20:16:24 +0000 (0:00:02.260) 0:00:57.116 *********** 2025-05-13 20:20:25.882286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882294 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882318 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.882327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882365 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.882381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882394 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.882420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882433 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.882447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882461 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.882475 | orchestrator | 2025-05-13 20:20:25.882488 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-13 20:20:25.882500 | orchestrator | Tuesday 13 May 2025 20:16:27 +0000 (0:00:02.883) 0:00:59.999 *********** 2025-05-13 20:20:25.882508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.882516 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882524 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.882532 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.882541 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.882555 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.882576 | orchestrator | 2025-05-13 20:20:25.882589 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-13 20:20:25.882607 | orchestrator | Tuesday 13 May 2025 20:16:29 +0000 (0:00:02.491) 0:01:02.491 *********** 2025-05-13 20:20:25.882620 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882633 | orchestrator | 2025-05-13 20:20:25.882647 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-13 20:20:25.882661 | orchestrator | Tuesday 13 May 2025 20:16:29 +0000 (0:00:00.120) 0:01:02.611 *********** 2025-05-13 20:20:25.882674 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882686 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.882695 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.882703 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.882713 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.882726 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.882739 | orchestrator | 2025-05-13 20:20:25.882752 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-13 20:20:25.882765 | orchestrator | Tuesday 13 May 2025 20:16:30 +0000 
(0:00:00.732) 0:01:03.344 *********** 2025-05-13 20:20:25.882778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882792 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.882806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882826 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.882840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.882862 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.882882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882891 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.882899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882907 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.882915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.882923 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.882931 | orchestrator | 2025-05-13 20:20:25.882938 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-13 20:20:25.882946 | orchestrator | Tuesday 13 May 2025 20:16:33 +0000 (0:00:02.382) 0:01:05.726 *********** 2025-05-13 20:20:25.882959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.882996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883026 | orchestrator | 2025-05-13 20:20:25.883034 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-13 20:20:25.883042 | orchestrator | Tuesday 13 May 2025 20:16:36 +0000 (0:00:03.108) 0:01:08.835 *********** 2025-05-13 20:20:25.883055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.883125 | orchestrator | 2025-05-13 20:20:25.883133 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-13 20:20:25.883141 | orchestrator | Tuesday 13 May 2025 20:16:42 +0000 (0:00:06.468) 0:01:15.303 *********** 2025-05-13 20:20:25.883155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883163 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.883171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883180 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.883188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883196 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.883208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883244 | orchestrator | 2025-05-13 20:20:25.883252 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-13 20:20:25.883260 | orchestrator | Tuesday 13 May 2025 20:16:45 +0000 (0:00:02.543) 0:01:17.846 *********** 2025-05-13 20:20:25.883268 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.883276 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.883284 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:20:25.883292 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.883299 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:20:25.883307 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:25.883315 | orchestrator | 2025-05-13 20:20:25.883323 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-13 20:20:25.883331 | orchestrator | Tuesday 13 May 2025 20:16:47 +0000 (0:00:02.406) 0:01:20.253 *********** 2025-05-13 20:20:25.883395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883419 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.883432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883440 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.883448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.883456 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.883471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.883488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 
20:20:25.883503 | orchestrator |
2025-05-13 20:20:25.883511 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-05-13 20:20:25.883519 | orchestrator | Tuesday 13 May 2025 20:16:51 +0000 (0:00:03.562) 0:01:23.816 ***********
2025-05-13 20:20:25.883527 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.883535 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.883542 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.883550 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.883558 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.883572 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.883580 | orchestrator |
2025-05-13 20:20:25.883588 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-05-13 20:20:25.883596 | orchestrator | Tuesday 13 May 2025 20:16:54 +0000 (0:00:02.846) 0:01:26.663 ***********
2025-05-13 20:20:25.883604 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.883611 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.883619 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.883627 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.883635 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.883642 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.883650 | orchestrator |
2025-05-13 20:20:25.883658 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-05-13 20:20:25.883667 | orchestrator | Tuesday 13 May 2025 20:16:56 +0000 (0:00:02.816) 0:01:29.479 ***********
2025-05-13 20:20:25.883681 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.883694 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.883707 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.883720 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.883733 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.883746 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.883758 | orchestrator |
2025-05-13 20:20:25.883770 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-05-13 20:20:25.883783 | orchestrator | Tuesday 13 May 2025 20:16:59 +0000 (0:00:02.549) 0:01:32.029 ***********
2025-05-13 20:20:25.883796 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.883810 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.883823 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.883836 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.883850 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.883864 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.883877 | orchestrator |
2025-05-13 20:20:25.883891 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-05-13 20:20:25.883904 | orchestrator | Tuesday 13 May 2025 20:17:02 +0000 (0:00:03.528) 0:01:35.557 ***********
2025-05-13 20:20:25.883918 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.883932 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.883945 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.883958 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.883972 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.883986 | orchestrator | skipping: [testbed-node-4]
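
The unbroken run of "skipping" results above is expected for this deployment: kolla-ansible renders each agent configuration file only when the corresponding service is enabled for the host, and this testbed uses OVN as the Neutron mechanism driver, so the Linuxbridge, Open vSwitch, SR-IOV, and Mellanox agent files never apply. A minimal sketch of how such a guarded template task can look, assuming kolla-ansible-style variables such as neutron_plugin_agent and node_config_directory (illustrative only, not the actual role source):

    - name: Copying over openvswitch_agent.ini
      ansible.builtin.template:
        src: openvswitch_agent.ini.j2
        dest: "{{ node_config_directory }}/neutron-openvswitch-agent/openvswitch_agent.ini"
        mode: "0660"
      # Both conditions are false on this testbed, which runs the OVN driver,
      # so Ansible reports "skipping" for every host and loop item.
      when:
        - neutron_plugin_agent == "openvswitch"
        - inventory_hostname in groups["neutron-openvswitch-agent"]
      notify:
        - Restart neutron-openvswitch-agent container

The per-item "skipping: ... => (item=...)" lines earlier in the play come from the same pattern applied inside a loop over the service dictionary, which is why the full service definition is echoed for each skipped host.
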
2025-05-13 20:20:25.883999 | orchestrator |
2025-05-13 20:20:25.884018 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-05-13 20:20:25.884027 | orchestrator | Tuesday 13 May 2025 20:17:05 +0000 (0:00:02.620) 0:01:38.178 ***********
2025-05-13 20:20:25.884034 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884042 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884057 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884065 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884073 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884080 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884088 | orchestrator |
2025-05-13 20:20:25.884096 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-05-13 20:20:25.884103 | orchestrator | Tuesday 13 May 2025 20:17:08 +0000 (0:00:03.279) 0:01:41.458 ***********
2025-05-13 20:20:25.884111 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884119 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884128 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884135 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884143 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884151 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884158 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884166 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884174 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884182 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884190 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-05-13 20:20:25.884197 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884205 | orchestrator |
2025-05-13 20:20:25.884213 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-05-13 20:20:25.884221 | orchestrator | Tuesday 13 May 2025 20:17:13 +0000 (0:00:04.575) 0:01:46.034 ***********
2025-05-13 20:20:25.884229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-13 20:20:25.884238 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value':
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884260 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.884268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884286 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.884295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884303 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.884311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.884319 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.884331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.884366 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.884377 | orchestrator | 2025-05-13 20:20:25.884385 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-13 20:20:25.884393 | orchestrator | Tuesday 13 May 2025 20:17:16 +0000 (0:00:03.140) 0:01:49.175 *********** 2025-05-13 20:20:25.884401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.884415 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.884430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884438 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.884447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884455 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.884463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.884477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.884485 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.884494 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.884507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.884516 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.884524 | orchestrator | 2025-05-13 20:20:25.884532 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-13 20:20:25.884540 | orchestrator | Tuesday 13 May 2025 20:17:19 +0000 (0:00:03.036) 0:01:52.211 
***********
2025-05-13 20:20:25.884547 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884555 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884563 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884570 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884583 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884591 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884599 | orchestrator |
2025-05-13 20:20:25.884607 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-05-13 20:20:25.884615 | orchestrator | Tuesday 13 May 2025 20:17:24 +0000 (0:00:04.464) 0:01:56.676 ***********
2025-05-13 20:20:25.884623 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884631 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884639 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884646 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:20:25.884654 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:20:25.884662 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:20:25.884669 | orchestrator |
2025-05-13 20:20:25.884677 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-13 20:20:25.884685 | orchestrator | Tuesday 13 May 2025 20:17:28 +0000 (0:00:04.650) 0:02:01.327 ***********
2025-05-13 20:20:25.884693 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884700 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884708 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884716 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884723 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884732 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884739 | orchestrator |
2025-05-13 20:20:25.884747 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-13 20:20:25.884755 | orchestrator | Tuesday 13 May 2025 20:17:31 +0000 (0:00:03.222) 0:02:04.550 ***********
2025-05-13 20:20:25.884763 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884771 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884778 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884786 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884794 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884801 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884809 | orchestrator |
2025-05-13 20:20:25.884817 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-13 20:20:25.884825 | orchestrator | Tuesday 13 May 2025 20:17:35 +0000 (0:00:03.129) 0:02:07.680 ***********
2025-05-13 20:20:25.884832 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884840 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884848 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884855 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884863 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884871 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884885 | orchestrator |
2025-05-13 20:20:25.884893 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-13 20:20:25.884901 | orchestrator | Tuesday 13 May 2025 20:17:37 +0000 (0:00:02.955) 0:02:10.636 ***********
2025-05-13 20:20:25.884909 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884916 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.884924 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.884932 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.884939 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884947 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.884955 | orchestrator |
2025-05-13 20:20:25.884963 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-13 20:20:25.884971 | orchestrator | Tuesday 13 May 2025 20:17:39 +0000 (0:00:01.880) 0:02:12.516 ***********
2025-05-13 20:20:25.884979 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.884987 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.884994 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.885002 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.885010 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.885017 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.885025 | orchestrator |
2025-05-13 20:20:25.885037 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-13 20:20:25.885046 | orchestrator | Tuesday 13 May 2025 20:17:41 +0000 (0:00:02.090) 0:02:14.607 ***********
2025-05-13 20:20:25.885053 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.885061 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.885069 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.885077 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.885085 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.885093 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.885101 | orchestrator |
2025-05-13 20:20:25.885109 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-13 20:20:25.885117 | orchestrator | Tuesday 13 May 2025 20:17:44 +0000 (0:00:02.102) 0:02:16.709 ***********
2025-05-13 20:20:25.885125 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.885132 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.885140 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.885148 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.885156 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.885163 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.885171 | orchestrator |
2025-05-13 20:20:25.885179 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-13 20:20:25.885187 | orchestrator | Tuesday 13 May 2025 20:17:46 +0000 (0:00:02.689) 0:02:19.399 ***********
2025-05-13 20:20:25.885195 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:25.885203 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:25.885211 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:20:25.885218 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:25.885227 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:20:25.885235 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:20:25.885243 | orchestrator |
2025-05-13 20:20:25.885251 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-13 20:20:25.885259 | orchestrator | Tuesday 13 May 2025 20:17:49 +0000
(0:00:02.860) 0:02:22.260 *********** 2025-05-13 20:20:25.885267 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885276 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.885284 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885292 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.885306 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885319 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.885328 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885354 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.885364 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885372 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.885380 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-13 20:20:25.885388 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.885395 | orchestrator | 2025-05-13 20:20:25.885403 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-13 20:20:25.885410 | orchestrator | Tuesday 13 May 2025 20:17:52 +0000 (0:00:02.577) 0:02:24.837 *********** 2025-05-13 20:20:25.885419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.885427 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.885441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.885449 | 
orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.885457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-13 20:20:25.885466 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.885479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.885493 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.885501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.885509 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.885517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-13 20:20:25.885525 | 
orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.885533 | orchestrator | 2025-05-13 20:20:25.885541 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-13 20:20:25.885549 | orchestrator | Tuesday 13 May 2025 20:17:54 +0000 (0:00:02.023) 0:02:26.861 *********** 2025-05-13 20:20:25.885561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.885570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.885591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-13 20:20:25.885601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.885609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.885625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-13 20:20:25.885633 | orchestrator | 2025-05-13 20:20:25.885642 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-13 20:20:25.885650 | orchestrator | Tuesday 13 May 2025 20:17:58 +0000 (0:00:04.316) 0:02:31.178 *********** 2025-05-13 20:20:25.885658 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:25.885666 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:25.885674 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:25.885682 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:20:25.885690 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:20:25.885703 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:20:25.885711 | orchestrator | 2025-05-13 20:20:25.885719 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-13 20:20:25.885727 | orchestrator | Tuesday 13 May 2025 20:17:59 +0000 (0:00:00.596) 0:02:31.775 *********** 2025-05-13 20:20:25.885735 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:25.885742 | orchestrator | 2025-05-13 20:20:25.885750 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-13 20:20:25.885758 | orchestrator | Tuesday 13 May 2025 20:18:01 +0000 (0:00:01.973) 0:02:33.748 *********** 2025-05-13 20:20:25.885766 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:20:25.885774 | orchestrator | 2025-05-13 20:20:25.885781 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-13 20:20:25.885789 | orchestrator | Tuesday 13 May 
2025-05-13 20:20:25.885781 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-05-13 20:20:25.885789 | orchestrator | Tuesday 13 May 2025 20:18:02 +0000 (0:00:01.727) 0:02:35.475 ***********
2025-05-13 20:20:25.885797 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:25.885804 | orchestrator |
2025-05-13 20:20:25.885812 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885820 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:39.197) 0:03:14.673 ***********
2025-05-13 20:20:25.885828 | orchestrator |
2025-05-13 20:20:25.885835 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885843 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.112) 0:03:14.785 ***********
2025-05-13 20:20:25.885851 | orchestrator |
2025-05-13 20:20:25.885859 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885873 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.321) 0:03:15.106 ***********
2025-05-13 20:20:25.885881 | orchestrator |
2025-05-13 20:20:25.885889 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885897 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.077) 0:03:15.183 ***********
2025-05-13 20:20:25.885904 | orchestrator |
2025-05-13 20:20:25.885912 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885920 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.072) 0:03:15.256 ***********
2025-05-13 20:20:25.885927 | orchestrator |
2025-05-13 20:20:25.885935 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-13 20:20:25.885943 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.076) 0:03:15.333 ***********
2025-05-13 20:20:25.885951 | orchestrator |
2025-05-13 20:20:25.885959 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-13 20:20:25.885967 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:00.105) 0:03:15.438 ***********
2025-05-13 20:20:25.885975 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:25.885982 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:25.885990 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:25.885998 | orchestrator |
2025-05-13 20:20:25.886006 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-13 20:20:25.886014 | orchestrator | Tuesday 13 May 2025 20:19:12 +0000 (0:00:29.556) 0:03:44.995 ***********
2025-05-13 20:20:25.886049 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:20:25.886057 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:20:25.886065 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:20:25.886073 | orchestrator |
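[Editor's note] The two handlers above restart the freshly reconfigured containers: neutron_server on the controllers, neutron_ovn_metadata_agent on the network nodes (the container names appear in the item dumps earlier). The per-host net effect is roughly the following sketch with the Docker SDK for Python; this is an assumption for illustration, since the role actually restarts containers through kolla's own container module:

    import docker  # Docker SDK for Python, stand-in for kolla's container module

    client = docker.from_env()
    # A controller restarts neutron_server; a network node restarts the metadata agent.
    for name in ("neutron_server", "neutron_ovn_metadata_agent"):
        try:
            client.containers.get(name).restart(timeout=10)
        except docker.errors.NotFound:
            pass  # this container is not deployed on the current host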
2025-05-13 20:20:25.886081 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:20:25.886089 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-13 20:20:25.886099 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-13 20:20:25.886107 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-13 20:20:25.886121 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-13 20:20:25.886129 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-13 20:20:25.886137 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-13 20:20:25.886145 | orchestrator |
2025-05-13 20:20:25.886153 | orchestrator |
2025-05-13 20:20:25.886161 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:20:25.886169 | orchestrator | Tuesday 13 May 2025 20:20:24 +0000 (0:01:11.815) 0:04:56.810 ***********
2025-05-13 20:20:25.886177 | orchestrator | ===============================================================================
2025-05-13 20:20:25.886189 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 71.82s
2025-05-13 20:20:25.886198 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.20s
2025-05-13 20:20:25.886206 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.56s
2025-05-13 20:20:25.886214 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.52s
2025-05-13 20:20:25.886222 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.47s
2025-05-13 20:20:25.886229 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.35s
2025-05-13 20:20:25.886237 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.65s
2025-05-13 20:20:25.886245 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 4.58s
2025-05-13 20:20:25.886253 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.46s
2025-05-13 20:20:25.886261 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.32s
2025-05-13 20:20:25.886269 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.04s
2025-05-13 20:20:25.886277 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.95s
2025-05-13 20:20:25.886285 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.56s
2025-05-13 20:20:25.886293 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.53s
2025-05-13 20:20:25.886300 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.50s
2025-05-13 20:20:25.886309 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.35s
2025-05-13 20:20:25.886316 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.34s
2025-05-13 20:20:25.886324 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.28s
2025-05-13 20:20:25.886332 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.27s
2025-05-13 20:20:25.886400 | orchestrator | Setting sysctl values --------------------------------------------------- 3.24s
2025-05-13 20:20:25.886414 | orchestrator | 2025-05-13 20:20:25 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:28.937610 | orchestrator | 2025-05-13 20:20:28 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:20:28.940129 | orchestrator | 2025-05-13 20:20:28 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED
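[Editor's note] The interleaved INFO lines come from the osism wrapper, which runs each play as a background task and polls its state (the STARTED/SUCCESS names are Celery-style task states) until it finishes. A minimal sketch of that wait loop; get_task_state() is a hypothetical stand-in for the real state lookup against the osism API:

    import time

    def wait_for_task(task_id: str, interval: float = 1.0) -> str:
        """Poll a task until it leaves STARTED, mirroring the INFO lines above."""
        while True:
            state = get_task_state(task_id)  # hypothetical helper, not the real client call
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                return state
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)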
2025-05-13 20:20:28.942529 | orchestrator | 2025-05-13 20:20:28 | INFO  | Task 411134eb-f1a0-4411-b0dd-7a2892315894 is in state STARTED
2025-05-13 20:20:28.947961 | orchestrator | 2025-05-13 20:20:28 | INFO  | Task 40580e27-9220-4c51-99b6-ac6c75c77f79 is in state SUCCESS
2025-05-13 20:20:28.949404 | orchestrator |
2025-05-13 20:20:28.949453 | orchestrator |
2025-05-13 20:20:28.949468 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:20:28.949509 | orchestrator |
2025-05-13 20:20:28.949521 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:20:28.949533 | orchestrator | Tuesday 13 May 2025 20:17:36 +0000 (0:00:00.507) 0:00:00.507 ***********
2025-05-13 20:20:28.949544 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:20:28.949555 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:20:28.949566 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:20:28.949577 | orchestrator |
2025-05-13 20:20:28.949588 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:20:28.949599 | orchestrator | Tuesday 13 May 2025 20:17:37 +0000 (0:00:00.362) 0:00:00.869 ***********
2025-05-13 20:20:28.949611 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-13 20:20:28.949622 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-13 20:20:28.949633 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-13 20:20:28.949644 | orchestrator |
2025-05-13 20:20:28.949662 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-13 20:20:28.949678 | orchestrator |
2025-05-13 20:20:28.949689 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-13 20:20:28.949700 | orchestrator | Tuesday 13 May 2025 20:17:37 +0000 (0:00:00.465) 0:00:01.335 ***********
2025-05-13 20:20:28.949711 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:20:28.949722 | orchestrator |
2025-05-13 20:20:28.949733 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-13 20:20:28.949744 | orchestrator | Tuesday 13 May 2025 20:17:38 +0000 (0:00:00.602) 0:00:01.937 ***********
2025-05-13 20:20:28.949754 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-13 20:20:28.949765 | orchestrator |
2025-05-13 20:20:28.949776 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-13 20:20:28.949786 | orchestrator | Tuesday 13 May 2025 20:17:41 +0000 (0:00:03.404) 0:00:05.342 ***********
2025-05-13 20:20:28.949797 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-13 20:20:28.949808 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-13 20:20:28.949819 | orchestrator |
2025-05-13 20:20:28.949830 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-13 20:20:28.949841 | orchestrator | Tuesday 13 May 2025 20:17:48 +0000 (0:00:06.429) 0:00:11.771 ***********
2025-05-13 20:20:28.949852 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 20:20:28.949863 | orchestrator |
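[Editor's note] The service-ks-register tasks above (and the user/role tasks that follow) register Designate in Keystone. Outside Ansible, the same registration can be sketched with openstacksdk; the endpoint URLs are copied from the changed items above, while the cloud name is an assumed clouds.yaml entry, and the role itself goes through the openstack.cloud collection rather than this code:

    import openstack  # openstacksdk

    conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

    # "Creating services": one 'designate' service of type 'dns'
    service = conn.identity.create_service(name="designate", type="dns")

    # "Creating endpoints": internal and public, as in the changed items above
    for interface, url in (
        ("internal", "https://api-int.testbed.osism.xyz:9001"),
        ("public", "https://api.testbed.osism.xyz:9001"),
    ):
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)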
2025-05-13 20:20:28.949874 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-13 20:20:28.949900 | orchestrator | Tuesday 13 May 2025 20:17:51 +0000 (0:00:03.277) 0:00:15.049 ***********
2025-05-13 20:20:28.949911 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 20:20:28.949922 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-13 20:20:28.949942 | orchestrator |
2025-05-13 20:20:28.949954 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-13 20:20:28.949967 | orchestrator | Tuesday 13 May 2025 20:17:55 +0000 (0:00:03.879) 0:00:18.928 ***********
2025-05-13 20:20:28.949979 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-13 20:20:28.949991 | orchestrator |
2025-05-13 20:20:28.950010 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-05-13 20:20:28.950061 | orchestrator | Tuesday 13 May 2025 20:17:58 +0000 (0:00:03.574) 0:00:22.502 ***********
2025-05-13 20:20:28.950074 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-05-13 20:20:28.950086 | orchestrator |
2025-05-13 20:20:28.950101 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-05-13 20:20:28.950120 | orchestrator | Tuesday 13 May 2025 20:18:02 +0000 (0:00:03.558) 0:00:26.061 ***********
2025-05-13 20:20:28.950167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.950228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.950260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.950281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950566 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950631 | orchestrator | 2025-05-13 20:20:28.950641 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-13 20:20:28.950651 | orchestrator | Tuesday 13 May 2025 20:18:04 +0000 (0:00:02.637) 0:00:28.699 *********** 2025-05-13 20:20:28.950666 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:28.950677 | orchestrator | 2025-05-13 20:20:28.950686 | orchestrator | TASK [designate : 
Set designate policy file] *********************************** 2025-05-13 20:20:28.950695 | orchestrator | Tuesday 13 May 2025 20:18:05 +0000 (0:00:00.243) 0:00:28.942 *********** 2025-05-13 20:20:28.950705 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:28.950715 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:28.950724 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:28.950734 | orchestrator | 2025-05-13 20:20:28.950743 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-13 20:20:28.950753 | orchestrator | Tuesday 13 May 2025 20:18:05 +0000 (0:00:00.378) 0:00:29.320 *********** 2025-05-13 20:20:28.950762 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:20:28.950772 | orchestrator | 2025-05-13 20:20:28.950784 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-13 20:20:28.950800 | orchestrator | Tuesday 13 May 2025 20:18:06 +0000 (0:00:00.767) 0:00:30.088 *********** 2025-05-13 20:20:28.950816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.950842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.950874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.950900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.950996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-05-13 20:20:28.951015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.951253 | orchestrator | 2025-05-13 20:20:28.951269 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-13 20:20:28.951286 | orchestrator | Tuesday 13 May 2025 20:18:12 +0000 (0:00:06.198) 0:00:36.287 *********** 2025-05-13 20:20:28.951297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:20:28.951307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:20:28.951325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 
20:20:28.951416 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:28.951426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:20:28.951437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:20:28.951454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951496 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951506 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:28.951516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:20:28.951527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:20:28.951546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951573 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.951597 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:28.951606 | orchestrator | 2025-05-13 20:20:28.951616 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-13 20:20:28.951626 | orchestrator | Tuesday 13 May 2025 20:18:13 +0000 (0:00:00.977) 0:00:37.264 *********** 2025-05-13 20:20:28.951636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-13 20:20:28.951646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-13 20:20:28.951662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951713 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:28.951723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.951733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.951748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951794 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:28.951808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.951819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.951829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.951881 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:28.951891 | orchestrator |
2025-05-13 20:20:28.951900 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-05-13 20:20:28.951910 | orchestrator | Tuesday 13 May 2025 20:18:14 +0000 (0:00:01.486) 0:00:38.751 ***********
2025-05-13 20:20:28.951925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.951936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.951953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.951969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.951979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.951994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952154 | orchestrator |
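Each config.json written by the task above is what kolla's container entrypoint consumes at startup: it names the command to run and the files to copy from /var/lib/kolla/config_files/ (the host's /etc/kolla/<service>/ directory, mounted read-only per the volume lists in the records) into their final location. A minimal sketch of what such a rendered file typically looks like for designate-api, with illustrative values rather than the actual file from this run:

    {
      "command": "designate-api --config-file /etc/designate/designate.conf",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/designate.conf",
          "dest": "/etc/designate/designate.conf",
          "owner": "designate",
          "perm": "0600"
        }
      ]
    }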
2025-05-13 20:20:28.952163 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-05-13 20:20:28.952173 | orchestrator | Tuesday 13 May 2025 20:18:20 +0000 (0:00:06.003) 0:00:44.755 ***********
2025-05-13 20:20:28.952187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952444 | orchestrator |
2025-05-13 20:20:28.952453 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-13 20:20:28.952470 | orchestrator | Tuesday 13 May 2025 20:18:37 +0000 (0:00:16.983) 0:01:01.738 ***********
2025-05-13 20:20:28.952486 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-13 20:20:28.952502 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-13 20:20:28.952518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
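pools.yaml is the file that wires designate to its bind9 backend: it defines the pool's ns_records, the nameservers designate polls to verify zone propagation, and the targets it pushes zone changes to, with designate-mdns acting as the hidden master that the bind9 servers transfer zones from. A minimal sketch of the usual shape of this file, using addresses from the 192.0.2.0/24 documentation range rather than the values rendered for this testbed:

    - name: default
      description: Default pool
      ns_records:
        - hostname: ns1.example.com.
          priority: 1
      nameservers:
        - host: 192.0.2.10
          port: 53
      targets:
        - type: bind9
          masters:
            - host: 192.0.2.10
              port: 5354
          options:
            host: 192.0.2.10
            port: 53
            rndc_host: 192.0.2.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key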
2025-05-13 20:20:28.952531 | orchestrator |
2025-05-13 20:20:28.952545 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-05-13 20:20:28.952559 | orchestrator | Tuesday 13 May 2025 20:18:42 +0000 (0:00:04.545) 0:01:06.284 ***********
2025-05-13 20:20:28.952575 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 20:20:28.952589 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 20:20:28.952603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-13 20:20:28.952617 | orchestrator |
2025-05-13 20:20:28.952631 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-05-13 20:20:28.952646 | orchestrator | Tuesday 13 May 2025 20:18:47 +0000 (0:00:04.602) 0:01:10.886 ***********
2025-05-13 20:20:28.952714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.952795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.952924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.952997 | orchestrator |
2025-05-13 20:20:28.953006 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-05-13 20:20:28.953016 | orchestrator | Tuesday 13 May 2025 20:18:50 +0000 (0:00:03.542) 0:01:14.428 ***********
2025-05-13 20:20:28.953034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953374 | orchestrator |
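The skip/changed pattern in the rndc.conf and rndc.key tasks above follows from looping over every designate service while only the bind9 backend and the worker actually need rndc credentials: the api, central, mdns, and producer items are skipped on all three nodes, while designate-backend-bind9 and designate-worker come back changed. A sketch of the shape such a task takes, an assumption about the role's structure for illustration rather than the verbatim kolla-ansible source (node_config_directory conventionally resolves to /etc/kolla):

    - name: Copying over rndc.key
      become: true
      template:
        src: rndc.key.j2
        dest: "{{ node_config_directory }}/{{ item.key }}/rndc.key"
        mode: "0660"
      # only the containers that speak rndc to named get the key
      when: item.key in ["designate-backend-bind9", "designate-worker"]
      with_dict: "{{ designate_services }}"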
2025-05-13 20:20:28.953391 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-13 20:20:28.953405 | orchestrator | Tuesday 13 May 2025 20:18:53 +0000 (0:00:02.700) 0:01:17.128 ***********
2025-05-13 20:20:28.953421 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:28.953437 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:28.953453 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:20:28.953469 | orchestrator |
2025-05-13 20:20:28.953481 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-13 20:20:28.953500 | orchestrator | Tuesday 13 May 2025 20:18:53 +0000 (0:00:00.395) 0:01:17.524 ***********
2025-05-13 20:20:28.953515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953601 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:20:28.953616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953678 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:20:28.953688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-13 20:20:28.953703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-13 20:20:28.953713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-13 20:20:28.953733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.953748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:20:28.953765 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:28.953774 | orchestrator | 2025-05-13 20:20:28.953784 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-13 20:20:28.953794 | orchestrator | Tuesday 13 May 2025 20:18:55 +0000 (0:00:01.344) 0:01:18.868 *********** 2025-05-13 20:20:28.953804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.953818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.953828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-13 20:20:28.953839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 
20:20:28.953910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.953996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.954006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.954058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:20:28.954077 | orchestrator | 2025-05-13 20:20:28.954087 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-13 20:20:28.954096 | orchestrator | Tuesday 13 May 2025 20:18:59 +0000 (0:00:04.502) 0:01:23.371 *********** 2025-05-13 20:20:28.954106 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:20:28.954116 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:20:28.954126 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:20:28.954135 | orchestrator | 2025-05-13 20:20:28.954145 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-13 20:20:28.954154 | orchestrator | Tuesday 13 May 2025 20:19:00 +0000 (0:00:00.569) 0:01:23.940 *********** 2025-05-13 20:20:28.954164 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-13 
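Each container spec above carries a Docker healthcheck whose test command is either 'healthcheck_port SERVICE PORT' or 'healthcheck_curl URL'. These are helper scripts shipped inside the kolla images; their exact behavior is not visible in this log. A minimal Python sketch of the two probe styles (behavior assumed; the port probe here only tests reachability, which may differ from what the kolla script checks):

    # Rough sketch of the two probe styles named in the specs above; the real
    # healthcheck_port / healthcheck_curl helpers live inside the kolla images
    # and their exact behavior is not shown in this log.
    import socket
    import sys
    import urllib.request

    def check_tcp(host: str, port: int, timeout: float = 30.0) -> bool:
        """True if a TCP connection to host:port succeeds (reachability only)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_http(url: str, timeout: float = 30.0) -> bool:
        """True if the URL answers with a 2xx/3xx response."""
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except Exception:
            return False

    if __name__ == "__main__":
        # designate-api probe from the log; the exit status drives container health.
        sys.exit(0 if check_http("http://192.168.16.10:9001") else 1)

Docker reruns the test every 'interval' seconds and flags the container unhealthy after 'retries' consecutive non-zero exits, which matches the interval/retries/timeout fields in the specs above.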
2025-05-13 20:20:28.954183 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-13 20:20:28.954192 | orchestrator | Tuesday 13 May 2025 20:19:02 +0000 (0:00:02.509) 0:01:26.449 ***********
2025-05-13 20:20:28.954202 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-13 20:20:28.954211 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-13 20:20:28.954231 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-13 20:20:28.954240 | orchestrator | Tuesday 13 May 2025 20:19:04 +0000 (0:00:02.264) 0:01:28.713 ***********
2025-05-13 20:20:28.954250 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954269 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 20:20:28.954278 | orchestrator | Tuesday 13 May 2025 20:19:19 +0000 (0:00:14.307) 0:01:43.021 ***********
2025-05-13 20:20:28.954297 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 20:20:28.954306 | orchestrator | Tuesday 13 May 2025 20:19:19 +0000 (0:00:00.205) 0:01:43.227 ***********
2025-05-13 20:20:28.954325 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-13 20:20:28.954335 | orchestrator | Tuesday 13 May 2025 20:19:19 +0000 (0:00:00.116) 0:01:43.343 ***********
2025-05-13 20:20:28.954375 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-13 20:20:28.954385 | orchestrator | Tuesday 13 May 2025 20:19:19 +0000 (0:00:00.154) 0:01:43.497 ***********
2025-05-13 20:20:28.954394 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954404 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954413 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954432 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-13 20:20:28.954446 | orchestrator | Tuesday 13 May 2025 20:19:31 +0000 (0:00:11.483) 0:01:54.981 ***********
2025-05-13 20:20:28.954456 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954465 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954475 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954494 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-13 20:20:28.954503 | orchestrator | Tuesday 13 May 2025 20:19:38 +0000 (0:00:07.641) 0:02:02.622 ***********
2025-05-13 20:20:28.954513 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954522 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954531 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954550 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-13 20:20:28.954566 | orchestrator | Tuesday 13 May 2025 20:19:46 +0000 (0:00:07.339) 0:02:09.962 ***********
2025-05-13 20:20:28.954576 | orchestrator | changed: [testbed-node-0]
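The RUNNING HANDLER entries above restart each designate container after its configuration changed; the results for the remaining nodes continue below. kolla-ansible drives this through its own container module, so the following is only a loose illustration of a restart-on-change flow using the docker-py client (an assumption, not what kolla actually calls):

    # Loose sketch with docker-py (pip install docker); kolla-ansible itself
    # uses its own kolla container module rather than this client.
    import docker

    def restart_if_changed(name: str, config_changed: bool) -> None:
        """Mimic a notify->handler flow: restart a container only when flagged."""
        if not config_changed:
            return
        client = docker.from_env()
        container = client.containers.get(name)  # e.g. 'designate_backend_bind9'
        container.restart(timeout=30)            # SIGTERM, then SIGKILL after 30s

    for svc in ("designate_backend_bind9", "designate_api", "designate_central"):
        restart_if_changed(svc, config_changed=True)

Like Ansible handlers, the restart runs at most once per container per play, no matter how many config tasks notified it.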
2025-05-13 20:20:28.954585 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954594 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954613 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-13 20:20:28.954622 | orchestrator | Tuesday 13 May 2025 20:19:58 +0000 (0:00:11.920) 0:02:21.882 ***********
2025-05-13 20:20:28.954632 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954641 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954650 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954669 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-13 20:20:28.954678 | orchestrator | Tuesday 13 May 2025 20:20:08 +0000 (0:00:10.707) 0:02:32.590 ***********
2025-05-13 20:20:28.954688 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954697 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:20:28.954706 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:20:28.954725 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-13 20:20:28.954735 | orchestrator | Tuesday 13 May 2025 20:20:19 +0000 (0:00:11.175) 0:02:43.765 ***********
2025-05-13 20:20:28.954744 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:20:28.954763 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:20:28.954773 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 20:20:28.954783 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:20:28.954793 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:20:28.954827 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:20:28.954837 | orchestrator | Tuesday 13 May 2025 20:20:27 +0000 (0:00:07.293) 0:02:51.059 ***********
2025-05-13 20:20:28.954846 | orchestrator | ===============================================================================
2025-05-13 20:20:28.954855 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.98s
2025-05-13 20:20:28.954865 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.31s
2025-05-13 20:20:28.954875 | orchestrator | designate : Restart designate-producer container ----------------------- 11.92s
2025-05-13 20:20:28.954884 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.48s
2025-05-13 20:20:28.954893 | orchestrator | designate : Restart designate-worker container ------------------------- 11.18s
2025-05-13 20:20:28.954904 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.71s
2025-05-13 20:20:28.954920 | orchestrator | designate : Restart designate-api container ----------------------------- 7.64s
2025-05-13 20:20:28.954943 | orchestrator | designate : Restart designate-central container ------------------------- 7.34s
2025-05-13 20:20:28.954962 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.29s
2025-05-13 20:20:28.954978 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.43s
2025-05-13 20:20:28.954993 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.20s
2025-05-13 20:20:28.955008 | orchestrator | designate : Copying over config.json files for services ----------------- 6.00s
2025-05-13 20:20:28.955022 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.60s
2025-05-13 20:20:28.955046 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.55s
2025-05-13 20:20:28.955060 | orchestrator | designate : Check designate containers ---------------------------------- 4.50s
2025-05-13 20:20:28.955074 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.88s
2025-05-13 20:20:28.955089 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.57s
2025-05-13 20:20:28.955104 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.56s
2025-05-13 20:20:28.955118 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.54s
2025-05-13 20:20:28.955133 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.40s
2025-05-13 20:20:28.955149 | orchestrator | 2025-05-13 20:20:28 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state STARTED
2025-05-13 20:20:28.955172 | orchestrator | 2025-05-13 20:20:28 | INFO  | Wait 1 second(s) until the next check
2025-05-13 20:20:35.037544 | orchestrator | 2025-05-13 20:20:35 | INFO  | Task 411134eb-f1a0-4411-b0dd-7a2892315894 is in state SUCCESS
[... status polling repeated every ~3 seconds until 20:22:12 omitted: tasks e53e30de-4249-485e-827d-e510014f9680, d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7, 2e0f9613-9c8d-474e-ad81-cedb71746110 and 2cd6ec30-ed17-4090-86c0-1267d99a9571 stay in state STARTED ...]
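The interleaved INFO lines come from the deploy wrapper polling the state of the queued tasks until each one reports a terminal state. The OSISM internals behind them are not visible in this log; a minimal sketch of such a wait loop (function names assumed) could be:

    # Minimal sketch of the poll loop behind the 'Task ... is in state STARTED'
    # messages; get_task_state() stands in for whatever API OSISM actually calls.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Block until every task reports a terminal state (SUCCESS/FAILURE)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

As in the log, a task drops out of the report as soon as it reaches SUCCESS, while the remaining ones keep being re-checked on the next round.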
2025-05-13 20:22:12.795005 | orchestrator | 2025-05-13 20:22:12 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED
2025-05-13 20:22:12.797236 | orchestrator | 2025-05-13 20:22:12 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED
2025-05-13 20:22:12.799589 | orchestrator | 2025-05-13 20:22:12 | INFO  | Task 2e0f9613-9c8d-474e-ad81-cedb71746110 is in state SUCCESS
2025-05-13 20:22:12.801294 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:22:12.801312 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:22:12.801320 | orchestrator | Tuesday 13 May 2025 20:20:31 +0000 (0:00:00.169) 0:00:00.169 ***********
2025-05-13 20:22:12.801329 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:22:12.801338 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:22:12.801346 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:22:12.801362 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:22:12.801370 | orchestrator | Tuesday 13 May 2025 20:20:31 +0000 (0:00:00.290) 0:00:00.460 ***********
2025-05-13 20:22:12.801378 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-13 20:22:12.801413 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-13 20:22:12.801429 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-05-13 20:22:12.801445 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-13 20:22:12.801462 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-13 20:22:12.801470 | orchestrator | Tuesday 13 May 2025 20:20:32 +0000 (0:00:00.604) 0:00:01.064 ***********
2025-05-13 20:22:12.801478 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:22:12.801501 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:22:12.801509 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:22:12.801525 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:22:12.801533 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:22:12.801543 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:22:12.801551 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-13 20:22:12.801574 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:22:12.801582 | orchestrator | Tuesday 13 May 2025 20:20:33 +0000 (0:00:00.780) 0:00:01.844 ***********
2025-05-13 20:22:12.801590 | orchestrator | ===============================================================================
2025-05-13 20:22:12.801597 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.78s
2025-05-13 20:22:12.801605 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2025-05-13 20:22:12.801645 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-05-13 20:22:12.801733 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-13 20:22:12.801771 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-13 20:22:12.801778 | orchestrator | Tuesday 13 May 2025 20:20:16 +0000 (0:00:00.262) 0:00:00.262 ***********
2025-05-13 20:22:12.801786 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:22:12.801794 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:22:12.801802 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:22:12.801817 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-13 20:22:12.801825 | orchestrator | Tuesday 13 May 2025 20:20:16 +0000 (0:00:00.295) 0:00:00.557 ***********
2025-05-13 20:22:12.801833 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-13 20:22:12.801841 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-13 20:22:12.801850 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
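The 'Group hosts based on enabled services' tasks above sort hosts into dynamic groups named after a flag and its value (enable_nova_True, enable_magnum_True), which later plays then target. A small Python sketch of the same grouping pattern (host flags invented for illustration):

    # Sketch of the 'group hosts by enabled service' pattern seen above: each
    # host lands in a group named after the flag and its value.
    hostvars = {
        "testbed-node-0": {"enable_nova": True, "enable_magnum": True},
        "testbed-node-1": {"enable_nova": True, "enable_magnum": True},
        "testbed-node-2": {"enable_nova": True, "enable_magnum": True},
    }

    groups: dict[str, list[str]] = {}
    for host, flags in hostvars.items():
        for flag, value in flags.items():
            groups.setdefault(f"{flag}_{value}", []).append(host)

    print(groups["enable_magnum_True"])  # all three nodes, as in the log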
2025-05-13 20:22:12.801869 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-13 20:22:12.801887 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-13 20:22:12.801896 | orchestrator | Tuesday 13 May 2025 20:20:16 +0000 (0:00:00.419) 0:00:00.977 ***********
2025-05-13 20:22:12.801905 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:22:12.801922 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-13 20:22:12.801931 | orchestrator | Tuesday 13 May 2025 20:20:17 +0000 (0:00:00.509) 0:00:01.486 ***********
2025-05-13 20:22:12.801940 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-13 20:22:12.801959 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-13 20:22:12.801967 | orchestrator | Tuesday 13 May 2025 20:20:21 +0000 (0:00:03.721) 0:00:05.208 ***********
2025-05-13 20:22:12.801976 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-13 20:22:12.801986 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-13 20:22:12.802004 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-13 20:22:12.802053 | orchestrator | Tuesday 13 May 2025 20:20:27 +0000 (0:00:06.439) 0:00:11.648 ***********
2025-05-13 20:22:12.802065 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-13 20:22:12.802083 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-13 20:22:12.802092 | orchestrator | Tuesday 13 May 2025 20:20:30 +0000 (0:00:03.118) 0:00:14.766 ***********
2025-05-13 20:22:12.802114 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-13 20:22:12.802123 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-13 20:22:12.802142 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-13 20:22:12.802151 | orchestrator | Tuesday 13 May 2025 20:20:34 +0000 (0:00:03.714) 0:00:18.481 ***********
2025-05-13 20:22:12.802160 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-13 20:22:12.802177 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-13 20:22:12.802186 | orchestrator | Tuesday 13 May 2025 20:20:37 +0000 (0:00:03.259) 0:00:21.740 ***********
2025-05-13 20:22:12.802195 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-13 20:22:12.802213 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-13 20:22:12.802221 | orchestrator | Tuesday 13 May 2025 20:20:41 +0000 (0:00:04.138) 0:00:25.879 ***********
2025-05-13 20:22:12.802230 | orchestrator | changed: [testbed-node-0]
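The service-ks-register steps above register magnum in Keystone: a service of type container-infra, its internal and public endpoints, the service project, a service user, and an admin role grant. A condensed openstacksdk sketch of that sequence (cloud name, region and password are assumptions; kolla-ansible performs this via its own Ansible modules):

    # Condensed sketch of the Keystone registration above, using openstacksdk
    # (pip install openstacksdk); kolla-ansible does this via Ansible modules.
    import openstack

    conn = openstack.connect(cloud="testbed")  # cloud name assumed

    # Service and endpoints, mirroring 'Creating services' / 'Creating endpoints'.
    svc = conn.identity.create_service(name="magnum", type="container-infra")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9511/v1"),
        ("public", "https://api.testbed.osism.xyz:9511/v1"),
    ]:
        conn.identity.create_endpoint(service_id=svc.id, interface=interface,
                                      url=url, region_id="RegionOne")  # region assumed

    # Service user plus role grant, mirroring 'Creating users' / 'Granting user roles'.
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="magnum", password="...",
                                     default_project_id=project.id)
    admin_role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project.id, user.id, admin_role.id)

The [WARNING] about no_log for update_password is emitted by the Ansible user module and does not indicate a failure; the task still reports changed.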
2025-05-13 20:22:12.802245 | orchestrator |
2025-05-13 20:22:12.802253 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-13 20:22:12.802261 | orchestrator | Tuesday 13 May 2025 20:20:44 +0000 (0:00:03.069) 0:00:28.948 ***********
2025-05-13 20:22:12.802268 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.802276 | orchestrator |
2025-05-13 20:22:12.802289 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-13 20:22:12.802297 | orchestrator | Tuesday 13 May 2025 20:20:48 +0000 (0:00:03.705) 0:00:32.653 ***********
2025-05-13 20:22:12.802305 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.802313 | orchestrator |
2025-05-13 20:22:12.802320 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-05-13 20:22:12.802328 | orchestrator | Tuesday 13 May 2025 20:20:52 +0000 (0:00:03.566) 0:00:36.219 ***********
2025-05-13 20:22:12.802339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:22:12.802351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-13 20:22:12.802360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802445 | orchestrator | 2025-05-13 20:22:12.802453 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-13 20:22:12.802461 | orchestrator | Tuesday 13 May 2025 20:20:53 +0000 (0:00:01.460) 0:00:37.680 *********** 2025-05-13 20:22:12.802469 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:12.802477 | orchestrator | 2025-05-13 20:22:12.802485 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-13 20:22:12.802492 | orchestrator | Tuesday 13 May 2025 20:20:53 +0000 (0:00:00.129) 0:00:37.809 *********** 2025-05-13 20:22:12.802500 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:12.802508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:12.802516 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:12.802524 | orchestrator | 2025-05-13 20:22:12.802532 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-13 
20:22:12.802539 | orchestrator | Tuesday 13 May 2025 20:20:54 +0000 (0:00:00.613) 0:00:38.423 *********** 2025-05-13 20:22:12.802547 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:22:12.802555 | orchestrator | 2025-05-13 20:22:12.802563 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-13 20:22:12.802571 | orchestrator | Tuesday 13 May 2025 20:20:55 +0000 (0:00:00.878) 0:00:39.301 *********** 2025-05-13 20:22:12.802579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802651 | orchestrator | 2025-05-13 20:22:12.802659 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-13 20:22:12.802667 | orchestrator | Tuesday 13 May 2025 20:20:57 +0000 (0:00:02.659) 0:00:41.960 *********** 2025-05-13 20:22:12.802675 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:22:12.802683 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:22:12.802690 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:22:12.802698 | orchestrator | 2025-05-13 20:22:12.802706 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-13 20:22:12.802719 | orchestrator | Tuesday 13 May 2025 20:20:58 +0000 (0:00:00.290) 0:00:42.251 *********** 2025-05-13 20:22:12.802728 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:22:12.802735 | orchestrator | 2025-05-13 20:22:12.802743 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-13 20:22:12.802751 | orchestrator | Tuesday 13 May 2025 20:20:58 +0000 (0:00:00.736) 0:00:42.988 *********** 2025-05-13 20:22:12.802763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.802789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.802830 | orchestrator | 2025-05-13 20:22:12.802838 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-13 20:22:12.802846 | orchestrator | Tuesday 13 May 2025 20:21:01 +0000 (0:00:02.474) 0:00:45.462 *********** 2025-05-13 20:22:12.802854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.802863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.802876 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:12.802884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.802900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.802908 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:12.802920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.802928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.802936 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:12.802944 | orchestrator | 2025-05-13 20:22:12.802952 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-13 20:22:12.802959 | orchestrator | Tuesday 13 May 2025 20:21:01 +0000 (0:00:00.582) 
0:00:46.045 *********** 2025-05-13 20:22:12.802968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.802986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.802994 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:12.803008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.803021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.803029 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 20:22:12.803037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.803051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.803059 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:12.803067 | orchestrator | 2025-05-13 20:22:12.803075 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-13 20:22:12.803083 | orchestrator | Tuesday 13 May 2025 20:21:03 +0000 (0:00:01.203) 0:00:47.248 *********** 2025-05-13 20:22:12.803096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803165 | orchestrator | 2025-05-13 20:22:12.803173 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-13 20:22:12.803181 | orchestrator | Tuesday 13 May 2025 20:21:05 +0000 (0:00:02.299) 0:00:49.547 *********** 2025-05-13 20:22:12.803193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:22:12.803258 | orchestrator | 2025-05-13 20:22:12.803265 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-13 20:22:12.803273 | orchestrator | Tuesday 13 May 2025 20:21:10 +0000 (0:00:04.910) 0:00:54.458 *********** 2025-05-13 20:22:12.803281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.803295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.803303 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:12.803311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.803325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.803333 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:12.803345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-13 20:22:12.803354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:22:12.803367 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:12.803375 | orchestrator | 2025-05-13 20:22:12.803383 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-13 20:22:12.803420 | orchestrator | Tuesday 13 May 2025 20:21:11 +0000 (0:00:00.876) 0:00:55.334 *********** 2025-05-13 20:22:12.803429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 20:22:12.803456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-13 
20:22:12.803465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:22:12.803479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:22:12.803487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-13 20:22:12.803495 | orchestrator |
2025-05-13 20:22:12.803503 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-13 20:22:12.803511 | orchestrator | Tuesday 13 May 2025 20:21:13 +0000 (0:00:02.164) 0:00:57.499 ***********
2025-05-13 20:22:12.803519 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:22:12.803527 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:22:12.803534 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:22:12.803542 | orchestrator |
2025-05-13 20:22:12.803550 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-05-13 20:22:12.803558 | orchestrator | Tuesday 13 May 2025 20:21:13 +0000 (0:00:00.293) 0:00:57.792 ***********
2025-05-13 20:22:12.803566 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.803573 | orchestrator |
2025-05-13 20:22:12.803581 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-05-13 20:22:12.803589 | orchestrator | Tuesday 13 May 2025 20:21:15 +0000 (0:00:02.032) 0:00:59.825 ***********
2025-05-13 20:22:12.803597 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.803605 | orchestrator |
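The two database tasks above prepare MariaDB for Magnum before the bootstrap container initializes the schema. A sketch of the equivalent SQL via PyMySQL; the host 192.168.16.9 is the internal VIP that appears in the no_proxy lists above, and both passwords are placeholders (Kolla generates the real ones, which never appear in this log):

    import pymysql

    # Placeholders: real credentials are managed by Kolla, not shown in this log.
    conn = pymysql.connect(host="192.168.16.9", user="root", password="ROOT_PASSWORD")
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS magnum")
        # %% escapes the literal % host wildcard because args are interpolated.
        cur.execute(
            "CREATE USER IF NOT EXISTS 'magnum'@'%%' IDENTIFIED BY %s",
            ("MAGNUM_DB_PASSWORD",),
        )
        cur.execute("GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'%'")
    conn.commit()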
2025-05-13 20:22:12.803612 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-05-13 20:22:12.803620 | orchestrator | Tuesday 13 May 2025 20:21:17 +0000 (0:00:02.217) 0:01:02.042 ***********
2025-05-13 20:22:12.803633 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.803641 | orchestrator |
2025-05-13 20:22:12.803648 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 20:22:12.803656 | orchestrator | Tuesday 13 May 2025 20:21:33 +0000 (0:00:15.802) 0:01:17.844 ***********
2025-05-13 20:22:12.803664 | orchestrator |
2025-05-13 20:22:12.803672 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 20:22:12.803679 | orchestrator | Tuesday 13 May 2025 20:21:33 +0000 (0:00:00.092) 0:01:17.936 ***********
2025-05-13 20:22:12.803687 | orchestrator |
2025-05-13 20:22:12.803695 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-13 20:22:12.803703 | orchestrator | Tuesday 13 May 2025 20:21:33 +0000 (0:00:00.071) 0:01:18.007 ***********
2025-05-13 20:22:12.803716 | orchestrator |
2025-05-13 20:22:12.803724 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-05-13 20:22:12.803732 | orchestrator | Tuesday 13 May 2025 20:21:33 +0000 (0:00:00.070) 0:01:18.078 ***********
2025-05-13 20:22:12.803740 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.803748 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:22:12.803755 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:22:12.803763 | orchestrator |
2025-05-13 20:22:12.803771 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-13 20:22:12.803779 | orchestrator | Tuesday 13 May 2025 20:21:57 +0000 (0:00:23.245) 0:01:41.324 ***********
2025-05-13 20:22:12.803786 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:12.803798 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:22:12.803806 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:22:12.803814 | orchestrator |
2025-05-13 20:22:12.803821 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:22:12.803830 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-13 20:22:12.803838 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 20:22:12.803846 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-13 20:22:12.803854 | orchestrator |
2025-05-13 20:22:12.803862 | orchestrator |
2025-05-13 20:22:12.803870 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:22:12.803877 | orchestrator | Tuesday 13 May 2025 20:22:10 +0000 (0:00:12.926) 0:01:54.251 ***********
2025-05-13 20:22:12.803885 | orchestrator | ===============================================================================
2025-05-13 20:22:12.803893 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.25s
2025-05-13 20:22:12.803900 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.80s
2025-05-13 20:22:12.803908 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.93s
2025-05-13 20:22:12.803916 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.44s
2025-05-13 20:22:12.803924 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.91s
2025-05-13 20:22:12.803931 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.14s
2025-05-13 20:22:12.803939 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.72s
2025-05-13 20:22:12.803947 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.71s
2025-05-13 20:22:12.803955 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.71s
2025-05-13 20:22:12.803962 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.57s
2025-05-13 20:22:12.803970 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.26s
2025-05-13 20:22:12.803978 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.12s
2025-05-13 20:22:12.803986 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.07s
2025-05-13 20:22:12.803993 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.66s
2025-05-13 20:22:12.804001 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.47s
2025-05-13 20:22:12.804009 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.30s
2025-05-13 20:22:12.804016 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.22s
2025-05-13 20:22:12.804024 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.16s
2025-05-13 20:22:12.804032 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.03s
2025-05-13 20:22:12.804048 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.46s
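The osism watcher below polls its three deployment tasks once per second until each leaves the STARTED state. A minimal sketch of such a loop; get_state stands in for whatever state lookup the real watcher performs and is not an actual osism API:

    import time

    def wait_for_tasks(get_state, task_ids, interval=1.0):
        # Poll every task until none reports STARTED anymore.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)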
20:22:21.960970 | orchestrator | 2025-05-13 20:22:21 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:21.962563 | orchestrator | 2025-05-13 20:22:21 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:21.962611 | orchestrator | 2025-05-13 20:22:21 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:24.997514 | orchestrator | 2025-05-13 20:22:24 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:25.001281 | orchestrator | 2025-05-13 20:22:25 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:25.002410 | orchestrator | 2025-05-13 20:22:25 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:25.002451 | orchestrator | 2025-05-13 20:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:28.065689 | orchestrator | 2025-05-13 20:22:28 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:28.066695 | orchestrator | 2025-05-13 20:22:28 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:28.067826 | orchestrator | 2025-05-13 20:22:28 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:28.067861 | orchestrator | 2025-05-13 20:22:28 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:31.105757 | orchestrator | 2025-05-13 20:22:31 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:31.106225 | orchestrator | 2025-05-13 20:22:31 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:31.107461 | orchestrator | 2025-05-13 20:22:31 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:31.107511 | orchestrator | 2025-05-13 20:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:34.147905 | orchestrator | 2025-05-13 20:22:34 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:34.152029 | orchestrator | 2025-05-13 20:22:34 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:34.154285 | orchestrator | 2025-05-13 20:22:34 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:34.154824 | orchestrator | 2025-05-13 20:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:37.202302 | orchestrator | 2025-05-13 20:22:37 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:37.204314 | orchestrator | 2025-05-13 20:22:37 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:37.207524 | orchestrator | 2025-05-13 20:22:37 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:37.207564 | orchestrator | 2025-05-13 20:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:40.259542 | orchestrator | 2025-05-13 20:22:40 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:40.262601 | orchestrator | 2025-05-13 20:22:40 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state STARTED 2025-05-13 20:22:40.265118 | orchestrator | 2025-05-13 20:22:40 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:40.265474 | orchestrator | 2025-05-13 20:22:40 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:43.313539 | orchestrator | 2025-05-13 
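The interleaved INFO lines come from the deployment tooling polling background tasks (identified by their task IDs) until they leave the STARTED state. The same wait-until-done behaviour can be sketched in plain Ansible as a retry loop; the `osism-task-state` helper below is hypothetical, standing in for whatever the real client does:

```yaml
# Illustrative retry loop, assuming a hypothetical `osism-task-state`
# command that prints the current state of a task ID.
- name: Wait for background task to reach a terminal state
  ansible.builtin.command: osism-task-state 2cd6ec30-ed17-4090-86c0-1267d99a9571
  register: task_state
  retries: 100
  delay: 3                      # roughly the cadence of the checks in the log
  until: task_state.stdout in ['SUCCESS', 'FAILURE']
  changed_when: false
```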
20:22:43 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:43.318917 | orchestrator | 2025-05-13 20:22:43 | INFO  | Task d3bfdaec-43c8-4c5b-b1b6-10f7423dbcf7 is in state SUCCESS 2025-05-13 20:22:43.320368 | orchestrator | 2025-05-13 20:22:43.320418 | orchestrator | 2025-05-13 20:22:43.320431 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:22:43.320443 | orchestrator | 2025-05-13 20:22:43.320455 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:22:43.320466 | orchestrator | Tuesday 13 May 2025 20:20:28 +0000 (0:00:00.257) 0:00:00.257 *********** 2025-05-13 20:22:43.320478 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:22:43.320489 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:22:43.320500 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:22:43.320511 | orchestrator | 2025-05-13 20:22:43.320522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:22:43.320533 | orchestrator | Tuesday 13 May 2025 20:20:29 +0000 (0:00:00.303) 0:00:00.561 *********** 2025-05-13 20:22:43.320544 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-13 20:22:43.320556 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-13 20:22:43.320566 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-13 20:22:43.320577 | orchestrator | 2025-05-13 20:22:43.320588 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-13 20:22:43.320662 | orchestrator | 2025-05-13 20:22:43.320674 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-13 20:22:43.320685 | orchestrator | Tuesday 13 May 2025 20:20:29 +0000 (0:00:00.408) 0:00:00.969 *********** 2025-05-13 20:22:43.320697 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:22:43.320762 | orchestrator | 2025-05-13 20:22:43.320776 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-13 20:22:43.320804 | orchestrator | Tuesday 13 May 2025 20:20:30 +0000 (0:00:00.514) 0:00:01.484 *********** 2025-05-13 20:22:43.320819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.320863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
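The two grouping tasks above build dynamic inventory groups (such as `enable_grafana_True`) so that the plays that follow can target only the hosts where a service is enabled. The pattern is roughly:

```yaml
# Sketch of the dynamic-group pattern behind "enable_grafana_True";
# the variable name mirrors the group name printed in the log.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_grafana_{{ enable_grafana | bool }}"

# A later play can then scope itself to the resulting group:
# - hosts: enable_grafana_True
#   roles: [grafana]
```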
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.320875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.320887 | orchestrator | 2025-05-13 20:22:43.320898 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-13 20:22:43.320910 | orchestrator | Tuesday 13 May 2025 20:20:30 +0000 (0:00:00.702) 0:00:02.186 *********** 2025-05-13 20:22:43.321007 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-13 20:22:43.321022 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-13 20:22:43.321034 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:22:43.321046 | orchestrator | 2025-05-13 20:22:43.321058 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-13 20:22:43.321071 | orchestrator | Tuesday 13 May 2025 20:20:31 +0000 (0:00:00.820) 0:00:03.007 *********** 2025-05-13 20:22:43.321084 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:22:43.321097 | orchestrator | 2025-05-13 20:22:43.321109 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-13 20:22:43.321121 | orchestrator | Tuesday 13 May 2025 20:20:32 +0000 (0:00:00.688) 0:00:03.695 *********** 2025-05-13 20:22:43.321149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
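Each `changed:` line above echoes one entry of the role's service map: a dict keyed by service name whose value carries the container name, image, volumes, and haproxy listener settings (including the external listener on `api.testbed.osism.xyz:3000`). Tasks iterate that map with `with_dict`, which is why the full dict is printed per item. A trimmed sketch of the map and the iteration, reindented from the values already visible in the log:

```yaml
# defaults (trimmed): the service map echoed in every item above
grafana_services:
  grafana:
    container_name: grafana
    enabled: true
    image: registry.osism.tech/kolla/grafana:2024.2
    volumes:
      - /etc/kolla/grafana/:/var/lib/kolla/config_files/:ro
    haproxy:
      grafana_server:
        enabled: "yes"
        mode: http
        external: false
        port: "3000"
        listen_port: "3000"
      grafana_server_external:
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "3000"
        listen_port: "3000"
---
# tasks (sketch): iterating the map is what prints the full dict per item
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  when: item.value.enabled | bool
  with_dict: "{{ grafana_services }}"
```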
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321206 | orchestrator | 2025-05-13 20:22:43.321219 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-13 20:22:43.321231 | orchestrator | Tuesday 13 May 2025 20:20:33 +0000 (0:00:01.427) 0:00:05.122 *********** 2025-05-13 20:22:43.321244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321271 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:43.321284 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:43.321302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321314 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:43.321325 | orchestrator | 2025-05-13 20:22:43.321417 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS key] ***** 2025-05-13 20:22:43.321430 | orchestrator | Tuesday 13 May 2025 20:20:34 +0000 (0:00:00.360) 0:00:05.483 *********** 2025-05-13 20:22:43.321442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321462 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:43.321479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321490 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:43.321501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-13 20:22:43.321512 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:43.321523 | orchestrator | 2025-05-13 20:22:43.321534 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-13 20:22:43.321544 | orchestrator | Tuesday 13 May 2025 20:20:34 +0000 (0:00:00.755) 0:00:06.238 *********** 2025-05-13 20:22:43.321556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': 
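The skipped "Copying over backend internal TLS certificate/key" tasks above are gated on backend TLS being enabled; with it disabled in this testbed, the conditional evaluates false on every node. Roughly, with the flag name assumed from kolla-ansible conventions rather than read from this configuration:

```yaml
# Sketch of the guard behind the skips above; flag and paths are assumptions.
- name: Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/grafana-cert.pem"   # placeholder path
    dest: /etc/kolla/grafana/grafana-cert.pem
  when: kolla_enable_tls_backend | bool
  with_dict: "{{ grafana_services }}"
```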
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321607 | orchestrator | 2025-05-13 20:22:43.321617 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-13 20:22:43.321629 | orchestrator | Tuesday 13 May 2025 20:20:36 +0000 (0:00:01.265) 0:00:07.503 *********** 2025-05-13 20:22:43.321644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
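Both the config.json and grafana.ini copies above follow the same shape: render a template per service entry and notify the container restart handler so the change takes effect. A trimmed sketch, with file and handler names assumed:

```yaml
# Sketch of the templating step behind the config.json/grafana.ini tasks;
# source name, destination, and handler name are assumptions.
- name: Copying over config.json files for services
  ansible.builtin.template:
    src: grafana.json.j2
    dest: /etc/kolla/grafana/config.json
    mode: "0660"
  notify:
    - Restart grafana container
```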
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-13 20:22:43.321679 | orchestrator | 2025-05-13 20:22:43.321690 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-13 20:22:43.321701 | orchestrator | Tuesday 13 May 2025 20:20:37 +0000 (0:00:01.316) 0:00:08.820 *********** 2025-05-13 20:22:43.321712 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:43.321723 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:22:43.321734 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:43.321745 | orchestrator | 2025-05-13 20:22:43.321756 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-13 20:22:43.321766 | orchestrator | Tuesday 13 May 2025 20:20:37 +0000 (0:00:00.518) 0:00:09.339 *********** 2025-05-13 20:22:43.321777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 20:22:43.321788 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 20:22:43.321799 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-13 20:22:43.321810 | orchestrator | 2025-05-13 20:22:43.321821 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-13 20:22:43.321831 | orchestrator | Tuesday 13 May 2025 20:20:39 +0000 (0:00:01.258) 0:00:10.597 *********** 2025-05-13 20:22:43.321842 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 20:22:43.321853 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 20:22:43.321864 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-13 20:22:43.321882 | orchestrator | 2025-05-13 20:22:43.321893 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-13 20:22:43.321903 | orchestrator | Tuesday 13 May 2025 20:20:40 +0000 (0:00:01.299) 0:00:11.897 *********** 2025-05-13 20:22:43.321921 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:22:43.321945 | orchestrator | 2025-05-13 20:22:43.321956 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-13 20:22:43.321976 | orchestrator | Tuesday 13 May 2025 20:20:41 +0000 (0:00:00.926) 0:00:12.823 *********** 2025-05-13 20:22:43.321987 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-13 20:22:43.321998 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-13 20:22:43.322009 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:22:43.322077 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:22:43.322089 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:22:43.322100 | orchestrator | 2025-05-13 20:22:43.322111 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-13 20:22:43.322122 | orchestrator | Tuesday 13 May 2025 20:20:42 +0000 (0:00:00.869) 0:00:13.692 *********** 2025-05-13 20:22:43.322133 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:22:43.322144 | orchestrator | 
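The "Configuring Prometheus as data source for Grafana" task above renders prometheus.yaml.j2 into Grafana's data-source provisioning directory. Grafana's provisioning format is well defined, so a plausible rendered result looks like the following; the URL is a placeholder, not this deployment's internal endpoint:

```yaml
# Plausible shape of the rendered data-source file; the URL is a placeholder.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.internal:9091
    isDefault: true
```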
skipping: [testbed-node-1] 2025-05-13 20:22:43.322155 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:22:43.322166 | orchestrator | 2025-05-13 20:22:43.322176 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-13 20:22:43.322187 | orchestrator | Tuesday 13 May 2025 20:20:42 +0000 (0:00:00.683) 0:00:14.376 *********** 2025-05-13 20:22:43.322205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100127, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8353937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100127, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8353937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100127, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8353937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100100, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8293936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100100, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 
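The provisioning.yaml copied a step earlier is what tells Grafana to load the dashboard JSON files being distributed here from disk. A typical dashboard-provider file of that kind, with the path an assumption:

```yaml
# Typical content of a Grafana dashboard provider file such as the
# provisioning.yaml copied above; the options path is assumed.
apiVersion: 1
providers:
  - name: default
    orgId: 1
    type: file
    options:
      path: /etc/grafana/dashboards
      foldersFromFilesStructure: true
```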
'ctime': 1747164063.8293936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100100, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8293936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100076, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8263936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100076, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8263936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100076, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8263936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100117, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100117, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100117, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100046, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8223937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100046, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8223937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100046, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8223937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100084, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8273938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100084, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8273938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100084, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8273938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100110, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100110, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100110, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8313937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100045, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8213935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100045, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8213935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100045, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8213935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100020, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8153934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100020, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8153934, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1100020, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8153934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.322977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8233936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8233936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8233936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100032, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8193936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 
20:22:43.323055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100032, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8193936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1100032, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8193936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100106, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8303938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100106, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8303938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100106, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8303938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100060, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8243937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100060, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8243937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100060, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8243937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100121, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8323936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100121, 'dev': 169, 'nlink': 1, 'atime': 1747129592.0, 'mtime': 1747129592.0, 'ctime': 1747164063.8323936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-13 20:22:43.323179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
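The long run of `changed:` items in this task comes from copying every dashboard JSON discovered under /operations/grafana/dashboards on the deploy host (the earlier "Find custom grafana dashboards" step) out to each node; the stat-like dicts printed per item are the find results themselves. In outline, with module options illustrative and trimmed:

```yaml
# Outline of the find-then-copy flow behind these items (illustrative).
- name: Find custom grafana dashboards
  ansible.builtin.find:
    paths: /operations/grafana/dashboards
    recurse: true
  delegate_to: localhost
  run_once: true
  register: custom_dashboards

- name: Copying over custom dashboards
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/grafana/dashboards/{{ item.path | basename }}"
  loop: "{{ custom_dashboards.files }}"
```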
2025-05-13 20:22:43 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => one item per dashboard file; each item value is the stat of /operations/grafana/dashboards/<item.key> (all regular files, mode '0644', uid 0, gid 0, owner root:root, dev 169, nlink 1, atime/mtime 1747129592.0):
    ceph/ceph_pools.json (25279 bytes, inode 1100041)
    ceph/rbd-overview.json (25686 bytes, inode 1100121)
    ceph/pool-overview.json (49139 bytes, inode 1100089)
    ceph/ceph-cluster-advanced.json (117836 bytes, inode 1100023)
    ceph/ceph_overview.json (80386 bytes, inode 1100036)
    ceph/osd-device-details.json (26655 bytes, inode 1100067)
    infrastructure/node_exporter_full.json (682774 bytes, inode 1100311)
    infrastructure/libvirt.json (29672 bytes, inode 1100284)
    infrastructure/alertmanager-overview.json (9645 bytes, inode 1100136)
    infrastructure/prometheus_alertmanager.json (115472 bytes, inode 1100396)
    infrastructure/blackbox.json (31128 bytes, inode 1100170)
    infrastructure/prometheus-remote-write.json (22317 bytes, inode 1100385)
    infrastructure/rabbitmq.json (222049 bytes, inode 1100403)
    infrastructure/node_exporter_side_by_side.json (70691 bytes, inode 1100340)
    infrastructure/opensearch.json (65458 bytes, inode 1100380)
    infrastructure/cadvisor.json (53882 bytes, inode 1100172)
    infrastructure/memcached.json (24243 bytes, inode 1100298)
    infrastructure/redfish.json (38087 bytes, inode 1100420)
    infrastructure/prometheus.json (21898 bytes, inode 1100390)
    infrastructure/elasticsearch.json (187864 bytes, inode 1100190)
    infrastructure/database.json (30898 bytes, inode 1100175)
    infrastructure/fluentd.json (82960 bytes, inode 1100193)
    infrastructure/haproxy.json (410814 bytes, inode 1100195)
    infrastructure/node-cluster-rsrc-use.json (16098 bytes, inode 1100304)
    infrastructure/nodes.json (21109 bytes, inode 1100374)
    infrastructure/node-rsrc-use.json (15725 bytes, inode 1100306)
    openstack/openstack.json (57270 bytes, inode 1100428)
2025-05-13 20:22:43.324508 | orchestrator |
2025-05-13 20:22:43.324518 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-13 20:22:43.324529 | orchestrator | Tuesday 13 May 2025 20:21:20 +0000 (0:00:37.592) 0:00:51.969 ***********
2025-05-13 20:22:43.324539 | orchestrator | changed: [testbed-node-1], [testbed-node-0], [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) (identical item on all three nodes)
2025-05-13 20:22:43.324640 | orchestrator |
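The per-item "changed" lines above come from an Ansible dict loop: a find/stat pass first builds a dict mapping each relative dashboard path to the stat of its source file, and the copy task then iterates that dict, which is why every item carries a full stat block as its value. A minimal sketch of that shape (not the actual kolla-ansible task; the dashboards_found variable, the grafana host group and the destination layout are assumptions):

- name: Copying over custom dashboards (sketch)
  hosts: grafana
  become: true
  vars:
    # assumed shape, matching the logged items:
    # {'ceph/ceph_pools.json': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', ...}, ...}
    dashboards_found: {}
  tasks:
    - name: Ensure per-category dashboard directories exist
      ansible.builtin.file:
        path: "/etc/kolla/grafana/dashboards/{{ item.key | dirname }}"
        state: directory
        mode: "0755"
      with_dict: "{{ dashboards_found }}"

    - name: Copy each discovered dashboard into the config directory
      ansible.builtin.copy:
        src: "{{ item.value.path }}"                          # absolute source path from the stat result
        dest: "/etc/kolla/grafana/dashboards/{{ item.key }}"  # assumed destination layout
        mode: "0644"                                          # matches the mode recorded in the log items
        remote_src: true                                      # assuming the source tree is present on the target
      with_dict: "{{ dashboards_found }}"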
2025-05-13 20:22:43.324649 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-13 20:22:43.324658 | orchestrator | Tuesday 13 May 2025 20:21:21 +0000 (0:00:01.005) 0:00:52.975 ***********
2025-05-13 20:22:43.324675 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:43.324684 | orchestrator |
2025-05-13 20:22:43.324693 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-13 20:22:43.324702 | orchestrator | Tuesday 13 May 2025 20:21:23 +0000 (0:00:02.137) 0:00:55.113 ***********
2025-05-13 20:22:43.324711 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:43.324720 | orchestrator |
2025-05-13 20:22:43.324728 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 20:22:43.324738 | orchestrator | Tuesday 13 May 2025 20:21:26 +0000 (0:00:00.062) 0:00:57.601 ***********
2025-05-13 20:22:43.324747 | orchestrator |
2025-05-13 20:22:43.324756 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 20:22:43.324831 | orchestrator | Tuesday 13 May 2025 20:21:26 +0000 (0:00:00.065) 0:00:57.664 ***********
2025-05-13 20:22:43.324844 | orchestrator |
2025-05-13 20:22:43.324853 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-13 20:22:43.324862 | orchestrator | Tuesday 13 May 2025 20:21:26 +0000 (0:00:00.063) 0:00:57.729 ***********
2025-05-13 20:22:43.324871 | orchestrator |
2025-05-13 20:22:43.324879 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-13 20:22:43.324888 | orchestrator | Tuesday 13 May 2025 20:21:26 +0000 (0:00:00.063) 0:00:57.792 ***********
2025-05-13 20:22:43.324896 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:22:43.324905 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:22:43.324914 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:22:43.324923 | orchestrator |
2025-05-13 20:22:43.324932 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-13 20:22:43.324941 | orchestrator | Tuesday 13 May 2025 20:21:28 +0000 (0:00:01.903) 0:00:59.695 ***********
2025-05-13 20:22:43.324949 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:22:43.324958 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:22:43.324967 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-13 20:22:43.324977 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-13 20:22:43.324987 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-05-13 20:22:43.324995 | orchestrator | ok: [testbed-node-0]
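The handler sequence above restarts Grafana on a single node first, polls it until it answers (the FAILED - RETRYING lines are that poll counting down from 12 retries), and only then restarts the remaining nodes, so the service never loses its last healthy replica mid-rollout. A minimal sketch of this first-node-then-rest pattern under assumed module choices (the real role may use different checks; port 3000 is taken from the haproxy config logged above):

- name: Rolling grafana restart (sketch)
  hosts: grafana
  become: true
  tasks:
    - name: Restart first grafana container
      ansible.builtin.command: docker restart grafana
      when: inventory_hostname == groups['grafana'][0]

    - name: Waiting for grafana to start on first node
      ansible.builtin.uri:
        url: "http://{{ ansible_host }}:3000/login"
      register: grafana_up
      until: grafana_up.status == 200
      retries: 12          # matches the retry countdown in the log
      delay: 10            # assumed interval between checks
      when: inventory_hostname == groups['grafana'][0]

    - name: Restart remaining grafana containers
      ansible.builtin.command: docker restart grafana
      when: inventory_hostname != groups['grafana'][0]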
2025-05-13 20:22:43.325004 | orchestrator |
2025-05-13 20:22:43.325021 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-13 20:22:43.325030 | orchestrator | Tuesday 13 May 2025 20:22:06 +0000 (0:00:37.878) 0:01:37.574 ***********
2025-05-13 20:22:43.325039 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:22:43.325048 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:22:43.325056 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:22:43.325065 | orchestrator |
2025-05-13 20:22:43.325074 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-13 20:22:43.325083 | orchestrator | Tuesday 13 May 2025 20:22:35 +0000 (0:00:29.486) 0:02:07.061 ***********
2025-05-13 20:22:43.325091 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:22:43.325100 | orchestrator |
2025-05-13 20:22:43.325108 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-13 20:22:43.325117 | orchestrator | Tuesday 13 May 2025 20:22:37 +0000 (0:00:02.326) 0:02:09.388 ***********
2025-05-13 20:22:43.325126 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:22:43.325134 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:22:43.325143 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:22:43.325151 | orchestrator |
2025-05-13 20:22:43.325160 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-13 20:22:43.325169 | orchestrator | Tuesday 13 May 2025 20:22:38 +0000 (0:00:00.327) 0:02:09.716 ***********
2025-05-13 20:22:43.325178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-13 20:22:43.325195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-13 20:22:43.325205 | orchestrator |
2025-05-13 20:22:43.325214 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-13 20:22:43.325223 | orchestrator | Tuesday 13 May 2025 20:22:40 +0000 (0:00:02.343) 0:02:12.059 ***********
2025-05-13 20:22:43.325232 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:22:43.325240 | orchestrator |
2025-05-13 20:22:43.325249 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:22:43.325258 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 20:22:43.325267 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 20:22:43.325276 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-13 20:22:43.325285 | orchestrator |
2025-05-13 20:22:43.325293 | orchestrator |
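In the "Enable grafana datasources" task above, each datasource is a {key, value} item with an enabled flag and a data payload: the disabled influxdb entry is skipped and the opensearch payload is applied. A sketch of how such a payload can be registered through Grafana's datasource HTTP API (POST /api/datasources is Grafana's documented endpoint, returning 409 when the datasource already exists; the endpoint host, credentials variable and datasource dict are assumptions):

- name: Enable grafana datasources (sketch)
  hosts: grafana[0]
  tasks:
    - name: Register each enabled datasource via the Grafana API
      ansible.builtin.uri:
        url: "https://api.testbed.osism.xyz:3000/api/datasources"  # assumed from the external_fqdn logged above
        method: POST
        user: admin
        password: "{{ grafana_admin_password }}"  # assumed variable
        force_basic_auth: true
        body_format: json
        body: "{{ item.value.data }}"             # the 'data' payload shown in the log items
        status_code: [200, 409]                   # 409: datasource already exists
      when: item.value.enabled | bool
      with_dict: "{{ grafana_data_sources }}"     # assumed variable holding the influxdb/opensearch items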
2025-05-13 20:22:43.325302 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:22:43.325311 | orchestrator | Tuesday 13 May 2025 20:22:40 +0000 (0:00:00.239) 0:02:12.298 ***********
2025-05-13 20:22:43.325320 | orchestrator | ===============================================================================
2025-05-13 20:22:43.325329 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.88s
2025-05-13 20:22:43.325400 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.59s
2025-05-13 20:22:43.325410 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.49s
2025-05-13 20:22:43.325418 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.49s
2025-05-13 20:22:43.325427 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.34s
2025-05-13 20:22:43.325444 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.33s
2025-05-13 20:22:43.325453 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.14s
2025-05-13 20:22:43.325461 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.90s
2025-05-13 20:22:43.325471 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.43s
2025-05-13 20:22:43.325479 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s
2025-05-13 20:22:43.325488 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s
2025-05-13 20:22:43.325497 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.27s
2025-05-13 20:22:43.325505 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.26s
2025-05-13 20:22:43.325514 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.01s
2025-05-13 20:22:43.325522 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.93s
2025-05-13 20:22:43.325531 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.87s
2025-05-13 20:22:43.325540 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.82s
2025-05-13 20:22:43.325548 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.76s
2025-05-13 20:22:43.325557 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.70s
2025-05-13 20:22:43.325571 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
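The interleaved INFO lines that follow come from the OSISM client, which polls the state of its queued tasks once per second until they leave the STARTED state. The same wait-until pattern expressed as an Ansible task, purely for illustration (the check_task_state variable is an assumed placeholder; the real polling happens inside the client, not in a playbook):

- name: Wait for a queued task to finish (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    check_task_state: "echo SUCCESS"  # assumed placeholder for whatever command reports the task state
  tasks:
    - name: Poll until the task is no longer STARTED
      ansible.builtin.command: "{{ check_task_state }}"
      register: task_state
      until: "'STARTED' not in task_state.stdout"
      retries: 600
      delay: 1                        # matches "Wait 1 second(s) until the next check"
      changed_when: false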
STARTED 2025-05-13 20:22:49.420909 | orchestrator | 2025-05-13 20:22:49 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:49.420945 | orchestrator | 2025-05-13 20:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:52.472960 | orchestrator | 2025-05-13 20:22:52 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:52.474903 | orchestrator | 2025-05-13 20:22:52 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:52.475090 | orchestrator | 2025-05-13 20:22:52 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:55.528960 | orchestrator | 2025-05-13 20:22:55 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:55.531411 | orchestrator | 2025-05-13 20:22:55 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:55.531449 | orchestrator | 2025-05-13 20:22:55 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:22:58.585118 | orchestrator | 2025-05-13 20:22:58 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:22:58.587160 | orchestrator | 2025-05-13 20:22:58 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:22:58.587196 | orchestrator | 2025-05-13 20:22:58 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:23:01.640662 | orchestrator | 2025-05-13 20:23:01 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:23:01.642882 | orchestrator | 2025-05-13 20:23:01 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:23:01.643087 | orchestrator | 2025-05-13 20:23:01 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:23:04.700315 | orchestrator | 2025-05-13 20:23:04 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state STARTED 2025-05-13 20:23:04.701501 | orchestrator | 2025-05-13 20:23:04 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:23:04.701533 | orchestrator | 2025-05-13 20:23:04 | INFO  | Wait 1 second(s) until the next check 2025-05-13 20:23:07.756598 | orchestrator | 2025-05-13 20:23:07 | INFO  | Task e53e30de-4249-485e-827d-e510014f9680 is in state SUCCESS 2025-05-13 20:23:07.757691 | orchestrator | 2025-05-13 20:23:07.757739 | orchestrator | 2025-05-13 20:23:07.757758 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:23:07.757779 | orchestrator | 2025-05-13 20:23:07.757804 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-13 20:23:07.757831 | orchestrator | Tuesday 13 May 2025 20:13:43 +0000 (0:00:00.273) 0:00:00.273 *********** 2025-05-13 20:23:07.757850 | orchestrator | changed: [testbed-manager] 2025-05-13 20:23:07.757873 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.757891 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.759189 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:23:07.759353 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.759369 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.759381 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.759392 | orchestrator | 2025-05-13 20:23:07.759404 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:23:07.759415 | orchestrator | Tuesday 13 May 2025 20:13:44 +0000 
(0:00:00.820) 0:00:01.093 *********** 2025-05-13 20:23:07.759426 | orchestrator | changed: [testbed-manager] 2025-05-13 20:23:07.759438 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.759449 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.759459 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:23:07.759470 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.759481 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.759492 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.759535 | orchestrator | 2025-05-13 20:23:07.759546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:23:07.759557 | orchestrator | Tuesday 13 May 2025 20:13:45 +0000 (0:00:00.626) 0:00:01.719 *********** 2025-05-13 20:23:07.759568 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-13 20:23:07.759579 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-13 20:23:07.759590 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-13 20:23:07.759601 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-13 20:23:07.759625 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-13 20:23:07.759636 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-13 20:23:07.759647 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-13 20:23:07.759658 | orchestrator | 2025-05-13 20:23:07.759669 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-13 20:23:07.759679 | orchestrator | 2025-05-13 20:23:07.759690 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-13 20:23:07.759701 | orchestrator | Tuesday 13 May 2025 20:13:45 +0000 (0:00:00.945) 0:00:02.664 *********** 2025-05-13 20:23:07.759711 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.759722 | orchestrator | 2025-05-13 20:23:07.759733 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-13 20:23:07.759746 | orchestrator | Tuesday 13 May 2025 20:13:47 +0000 (0:00:01.242) 0:00:03.906 *********** 2025-05-13 20:23:07.759759 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-13 20:23:07.759771 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-13 20:23:07.759784 | orchestrator | 2025-05-13 20:23:07.759796 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-13 20:23:07.759808 | orchestrator | Tuesday 13 May 2025 20:13:51 +0000 (0:00:04.262) 0:00:08.169 *********** 2025-05-13 20:23:07.759821 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:23:07.759833 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-13 20:23:07.759845 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.759859 | orchestrator | 2025-05-13 20:23:07.759871 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-13 20:23:07.759881 | orchestrator | Tuesday 13 May 2025 20:13:55 +0000 (0:00:04.201) 0:00:12.371 *********** 2025-05-13 20:23:07.759892 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.759903 | orchestrator | 2025-05-13 20:23:07.759914 | orchestrator | TASK [nova : Copying over config.json files for 
nova-api-bootstrap] ************ 2025-05-13 20:23:07.759924 | orchestrator | Tuesday 13 May 2025 20:13:56 +0000 (0:00:00.873) 0:00:13.245 *********** 2025-05-13 20:23:07.759935 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.759945 | orchestrator | 2025-05-13 20:23:07.759957 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-13 20:23:07.759967 | orchestrator | Tuesday 13 May 2025 20:13:58 +0000 (0:00:01.742) 0:00:14.987 *********** 2025-05-13 20:23:07.759988 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.759999 | orchestrator | 2025-05-13 20:23:07.760010 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 20:23:07.760021 | orchestrator | Tuesday 13 May 2025 20:14:02 +0000 (0:00:04.366) 0:00:19.354 *********** 2025-05-13 20:23:07.760031 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760042 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760053 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760064 | orchestrator | 2025-05-13 20:23:07.760074 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-13 20:23:07.760084 | orchestrator | Tuesday 13 May 2025 20:14:03 +0000 (0:00:00.929) 0:00:20.284 *********** 2025-05-13 20:23:07.760095 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.760106 | orchestrator | 2025-05-13 20:23:07.760117 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-13 20:23:07.760127 | orchestrator | Tuesday 13 May 2025 20:14:35 +0000 (0:00:31.587) 0:00:51.871 *********** 2025-05-13 20:23:07.760138 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.760149 | orchestrator | 2025-05-13 20:23:07.760159 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-13 20:23:07.760170 | orchestrator | Tuesday 13 May 2025 20:14:51 +0000 (0:00:16.303) 0:01:08.174 *********** 2025-05-13 20:23:07.760180 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.760191 | orchestrator | 2025-05-13 20:23:07.760202 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-13 20:23:07.760213 | orchestrator | Tuesday 13 May 2025 20:15:02 +0000 (0:00:10.926) 0:01:19.101 *********** 2025-05-13 20:23:07.760305 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.760319 | orchestrator | 2025-05-13 20:23:07.760330 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-13 20:23:07.760341 | orchestrator | Tuesday 13 May 2025 20:15:04 +0000 (0:00:01.621) 0:01:20.722 *********** 2025-05-13 20:23:07.760352 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760362 | orchestrator | 2025-05-13 20:23:07.760373 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 20:23:07.760384 | orchestrator | Tuesday 13 May 2025 20:15:04 +0000 (0:00:00.486) 0:01:21.209 *********** 2025-05-13 20:23:07.760396 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.760407 | orchestrator | 2025-05-13 20:23:07.760418 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-13 20:23:07.760428 | orchestrator | Tuesday 13 May 2025 20:15:05 +0000 (0:00:00.526) 0:01:21.735 
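[Editor's note] The "Running Nova API bootstrap container" and "Create cell0 mappings" steps above run nova-manage inside a one-shot container. A rough sketch of the equivalent commands, assuming the stock nova-manage CLI (this is not the literal bootstrap entrypoint, and the database DSN is a placeholder):

import subprocess

# Sync the API database schema, map cell0 to its dedicated database,
# then sync the cell database schema.
subprocess.run(["nova-manage", "api_db", "sync"], check=True)
subprocess.run(
    ["nova-manage", "cell_v2", "map_cell0",
     "--database_connection",
     "mysql+pymysql://nova:PASSWORD@db-host/nova_cell0"],  # placeholder DSN
    check=True,
)
subprocess.run(["nova-manage", "db", "sync"], check=True)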
*********** 2025-05-13 20:23:07.760439 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.760450 | orchestrator | 2025-05-13 20:23:07.760461 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-13 20:23:07.760470 | orchestrator | Tuesday 13 May 2025 20:15:22 +0000 (0:00:17.666) 0:01:39.401 *********** 2025-05-13 20:23:07.760479 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760489 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760499 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760508 | orchestrator | 2025-05-13 20:23:07.760517 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-13 20:23:07.760527 | orchestrator | 2025-05-13 20:23:07.760537 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-13 20:23:07.760546 | orchestrator | Tuesday 13 May 2025 20:15:23 +0000 (0:00:00.318) 0:01:39.720 *********** 2025-05-13 20:23:07.760556 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.760565 | orchestrator | 2025-05-13 20:23:07.760575 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-13 20:23:07.760590 | orchestrator | Tuesday 13 May 2025 20:15:23 +0000 (0:00:00.636) 0:01:40.356 *********** 2025-05-13 20:23:07.760600 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760617 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760627 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.760636 | orchestrator | 2025-05-13 20:23:07.760646 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-13 20:23:07.760656 | orchestrator | Tuesday 13 May 2025 20:15:25 +0000 (0:00:02.046) 0:01:42.403 *********** 2025-05-13 20:23:07.760665 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760675 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760684 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.760694 | orchestrator | 2025-05-13 20:23:07.760703 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-13 20:23:07.760713 | orchestrator | Tuesday 13 May 2025 20:15:27 +0000 (0:00:02.116) 0:01:44.519 *********** 2025-05-13 20:23:07.760722 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760732 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760741 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760751 | orchestrator | 2025-05-13 20:23:07.760761 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-13 20:23:07.760770 | orchestrator | Tuesday 13 May 2025 20:15:28 +0000 (0:00:00.342) 0:01:44.862 *********** 2025-05-13 20:23:07.760780 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-13 20:23:07.760789 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760798 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-13 20:23:07.760808 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760818 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-13 20:23:07.760828 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-13 20:23:07.760837 | orchestrator | 2025-05-13 20:23:07.760847 | orchestrator | TASK [service-rabbitmq : nova | Ensure 
RabbitMQ vhosts exist] ****************** 2025-05-13 20:23:07.760856 | orchestrator | Tuesday 13 May 2025 20:15:36 +0000 (0:00:08.124) 0:01:52.986 *********** 2025-05-13 20:23:07.760866 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760875 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760885 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760894 | orchestrator | 2025-05-13 20:23:07.760904 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-13 20:23:07.760913 | orchestrator | Tuesday 13 May 2025 20:15:36 +0000 (0:00:00.444) 0:01:53.431 *********** 2025-05-13 20:23:07.760923 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-13 20:23:07.760933 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.760942 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-13 20:23:07.760952 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.760961 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-13 20:23:07.760971 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.760980 | orchestrator | 2025-05-13 20:23:07.760990 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-13 20:23:07.760999 | orchestrator | Tuesday 13 May 2025 20:15:37 +0000 (0:00:00.789) 0:01:54.220 *********** 2025-05-13 20:23:07.761009 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761018 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.761028 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761037 | orchestrator | 2025-05-13 20:23:07.761047 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-13 20:23:07.761057 | orchestrator | Tuesday 13 May 2025 20:15:38 +0000 (0:00:00.661) 0:01:54.881 *********** 2025-05-13 20:23:07.761066 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761076 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761085 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.761094 | orchestrator | 2025-05-13 20:23:07.761104 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-13 20:23:07.761114 | orchestrator | Tuesday 13 May 2025 20:15:39 +0000 (0:00:01.128) 0:01:56.010 *********** 2025-05-13 20:23:07.761123 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761139 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761158 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.761168 | orchestrator | 2025-05-13 20:23:07.761180 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-13 20:23:07.761196 | orchestrator | Tuesday 13 May 2025 20:15:41 +0000 (0:00:02.423) 0:01:58.434 *********** 2025-05-13 20:23:07.761212 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761250 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761267 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.761283 | orchestrator | 2025-05-13 20:23:07.761299 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-13 20:23:07.761315 | orchestrator | Tuesday 13 May 2025 20:16:04 +0000 (0:00:22.406) 0:02:20.840 *********** 2025-05-13 20:23:07.761332 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761346 | orchestrator | skipping: [testbed-node-2] 
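[Editor's note] The service-rabbitmq tasks above ensure the nova vhost and messaging user exist before the cell is created (here they are satisfied on testbed-node-0 and skipped elsewhere). A minimal sketch of the equivalent rabbitmqctl sequence, with placeholder names and credentials (kolla-ansible drives this through Ansible modules, not a script like this):

import subprocess

def ensure_rabbitmq_user(vhost: str, user: str, password: str) -> None:
    # rabbitmqctl add_vhost / add_user fail if the object already exists,
    # so the adds tolerate errors to approximate idempotence.
    subprocess.run(["rabbitmqctl", "add_vhost", vhost], check=False)
    subprocess.run(["rabbitmqctl", "add_user", user, password], check=False)
    # Grant configure/write/read on everything in the vhost.
    subprocess.run(
        ["rabbitmqctl", "set_permissions", "-p", vhost, user, ".*", ".*", ".*"],
        check=True,
    )

ensure_rabbitmq_user("/", "nova", "RABBITMQ_NOVA_PASSWORD")  # placeholders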
2025-05-13 20:23:07.761356 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.761366 | orchestrator | 2025-05-13 20:23:07.761375 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-13 20:23:07.761385 | orchestrator | Tuesday 13 May 2025 20:16:17 +0000 (0:00:13.052) 0:02:33.893 *********** 2025-05-13 20:23:07.761394 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761404 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.761413 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761423 | orchestrator | 2025-05-13 20:23:07.761432 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-13 20:23:07.761442 | orchestrator | Tuesday 13 May 2025 20:16:18 +0000 (0:00:00.910) 0:02:34.804 *********** 2025-05-13 20:23:07.761451 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761461 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761470 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.761480 | orchestrator | 2025-05-13 20:23:07.761490 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-13 20:23:07.761499 | orchestrator | Tuesday 13 May 2025 20:16:29 +0000 (0:00:10.909) 0:02:45.713 *********** 2025-05-13 20:23:07.761508 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.761518 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761533 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761542 | orchestrator | 2025-05-13 20:23:07.761552 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-13 20:23:07.761562 | orchestrator | Tuesday 13 May 2025 20:16:30 +0000 (0:00:01.602) 0:02:47.316 *********** 2025-05-13 20:23:07.761571 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.761580 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.761590 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.761600 | orchestrator | 2025-05-13 20:23:07.761609 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-13 20:23:07.761618 | orchestrator | 2025-05-13 20:23:07.761628 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 20:23:07.761637 | orchestrator | Tuesday 13 May 2025 20:16:30 +0000 (0:00:00.332) 0:02:47.648 *********** 2025-05-13 20:23:07.761647 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.761657 | orchestrator | 2025-05-13 20:23:07.761667 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-13 20:23:07.761676 | orchestrator | Tuesday 13 May 2025 20:16:32 +0000 (0:00:01.253) 0:02:48.902 *********** 2025-05-13 20:23:07.761685 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-13 20:23:07.761695 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-13 20:23:07.761704 | orchestrator | 2025-05-13 20:23:07.761714 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-13 20:23:07.761723 | orchestrator | Tuesday 13 May 2025 20:16:35 +0000 (0:00:03.101) 0:02:52.003 *********** 2025-05-13 20:23:07.761733 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-13 20:23:07.761750 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-13 20:23:07.761760 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-13 20:23:07.761770 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-13 20:23:07.761780 | orchestrator | 2025-05-13 20:23:07.761789 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-13 20:23:07.761798 | orchestrator | Tuesday 13 May 2025 20:16:41 +0000 (0:00:06.463) 0:02:58.467 *********** 2025-05-13 20:23:07.761808 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 20:23:07.761817 | orchestrator | 2025-05-13 20:23:07.761827 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-13 20:23:07.761836 | orchestrator | Tuesday 13 May 2025 20:16:44 +0000 (0:00:02.950) 0:03:01.417 *********** 2025-05-13 20:23:07.761846 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:23:07.761856 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-13 20:23:07.761865 | orchestrator | 2025-05-13 20:23:07.761874 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-13 20:23:07.761884 | orchestrator | Tuesday 13 May 2025 20:16:48 +0000 (0:00:03.730) 0:03:05.148 *********** 2025-05-13 20:23:07.761893 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:23:07.761903 | orchestrator | 2025-05-13 20:23:07.761912 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-13 20:23:07.761922 | orchestrator | Tuesday 13 May 2025 20:16:51 +0000 (0:00:03.393) 0:03:08.541 *********** 2025-05-13 20:23:07.761931 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-13 20:23:07.761940 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-13 20:23:07.761950 | orchestrator | 2025-05-13 20:23:07.761959 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-13 20:23:07.761976 | orchestrator | Tuesday 13 May 2025 20:16:59 +0000 (0:00:07.812) 0:03:16.354 *********** 2025-05-13 20:23:07.761991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762139 | orchestrator | 2025-05-13 20:23:07.762148 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-13 20:23:07.762158 | orchestrator | Tuesday 13 May 2025 20:17:02 +0000 (0:00:02.548) 0:03:18.903 *********** 2025-05-13 20:23:07.762173 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.762183 | orchestrator | 2025-05-13 20:23:07.762192 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-13 20:23:07.762202 | orchestrator | Tuesday 13 May 2025 20:17:02 +0000 (0:00:00.240) 0:03:19.144 *********** 2025-05-13 20:23:07.762211 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.762221 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.762247 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.762257 | orchestrator | 2025-05-13 20:23:07.762266 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-13 20:23:07.762276 | orchestrator | Tuesday 13 May 2025 20:17:03 +0000 (0:00:00.833) 0:03:19.978 *********** 2025-05-13 20:23:07.762285 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-13 20:23:07.762295 | orchestrator | 2025-05-13 20:23:07.762304 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-13 20:23:07.762314 | orchestrator | Tuesday 13 May 2025 20:17:05 +0000 (0:00:01.745) 0:03:21.724 *********** 2025-05-13 20:23:07.762323 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.762333 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.762342 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.762352 | orchestrator | 2025-05-13 20:23:07.762361 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-13 20:23:07.762370 | orchestrator | Tuesday 13 May 2025 20:17:05 +0000 (0:00:00.291) 0:03:22.015 *********** 2025-05-13 20:23:07.762380 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.762389 | orchestrator | 2025-05-13 20:23:07.762399 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-13 20:23:07.762408 | orchestrator | Tuesday 13 May 2025 20:17:06 +0000 (0:00:00.842) 0:03:22.858 *********** 2025-05-13 20:23:07.762418 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762511 | orchestrator | 2025-05-13 20:23:07.762521 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-13 20:23:07.762531 | orchestrator | Tuesday 13 May 2025 20:17:09 +0000 (0:00:03.548) 0:03:26.406 *********** 2025-05-13 20:23:07.762552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762579 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.762590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762611 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.762628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762661 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.762670 | orchestrator | 2025-05-13 20:23:07.762680 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-13 20:23:07.762690 | orchestrator | Tuesday 13 May 2025 20:17:11 +0000 (0:00:02.059) 0:03:28.466 *********** 2025-05-13 20:23:07.762700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762758 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.762773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762783 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.762794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.762805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.762815 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.762825 | orchestrator | 2025-05-13 20:23:07.762835 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-13 20:23:07.762844 | orchestrator | Tuesday 13 May 2025 20:17:13 +0000 (0:00:01.752) 0:03:30.218 *********** 2025-05-13 20:23:07.762861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.762951 | orchestrator | 2025-05-13 20:23:07.762961 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-13 20:23:07.762971 | orchestrator | Tuesday 13 May 2025 20:17:16 +0000 (0:00:03.003) 0:03:33.222 *********** 2025-05-13 20:23:07.762985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.762997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.763014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.763032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.763047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.763059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.763068 | orchestrator | 2025-05-13 20:23:07.763101 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-13 20:23:07.763111 | orchestrator | Tuesday 13 May 2025 20:17:25 +0000 (0:00:09.442) 0:03:42.664 *********** 2025-05-13 20:23:07.763122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.763145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.763156 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.763166 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.763182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.763192 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.763203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-13 20:23:07.763214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.763281 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.763293 | orchestrator | 2025-05-13 20:23:07.763303 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-13 20:23:07.763313 | orchestrator | Tuesday 13 May 2025 20:17:26 +0000 (0:00:00.895) 0:03:43.559 *********** 2025-05-13 20:23:07.763322 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.763332 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.763341 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:23:07.763350 | orchestrator | 2025-05-13 20:23:07.763366 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-13 20:23:07.763376 | orchestrator | Tuesday 13 May 2025 20:17:29 +0000 (0:00:02.212) 0:03:45.772 *********** 2025-05-13 20:23:07.763386 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.763396 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.763405 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.763414 | orchestrator | 2025-05-13 20:23:07.763424 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-13 20:23:07.763434 | orchestrator | Tuesday 13 May 2025 20:17:29 +0000 (0:00:00.685) 0:03:46.458 *********** 2025-05-13 20:23:07.763453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.763465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.763489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-13 20:23:07.763501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.763512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.763527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-13 20:23:07.763538 | orchestrator |
2025-05-13 20:23:07.763547 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-13 20:23:07.763557 | orchestrator | Tuesday 13 May 2025 20:17:32 +0000 (0:00:02.278) 0:03:48.736 ***********
2025-05-13 20:23:07.763567 | orchestrator |
2025-05-13 20:23:07.763576 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-13 20:23:07.763586 | orchestrator | Tuesday 13 May 2025 20:17:32 +0000 (0:00:00.124) 0:03:48.861 ***********
2025-05-13 20:23:07.763595 | orchestrator |
2025-05-13 20:23:07.763605 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-13 20:23:07.763614 | orchestrator | Tuesday 13 May 2025 20:17:32 +0000 (0:00:00.185) 0:03:49.047 ***********
2025-05-13 20:23:07.763624 | orchestrator |
2025-05-13 20:23:07.763633 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-05-13 20:23:07.763650 | orchestrator | Tuesday 13 May 2025 20:17:32 +0000 (0:00:00.569) 0:03:49.616 ***********
2025-05-13 20:23:07.763660 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:23:07.763669 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:23:07.763679 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:23:07.763688 | orchestrator |
2025-05-13 20:23:07.763697 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-05-13 20:23:07.763707 | orchestrator | Tuesday 13 May 2025 20:17:54 +0000 (0:00:21.233) 0:04:10.850 ***********
2025-05-13 20:23:07.763717 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:23:07.763726 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:23:07.763735 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:23:07.763745 | orchestrator |
2025-05-13 20:23:07.763754 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-13 20:23:07.763764 | orchestrator |
2025-05-13 20:23:07.763773 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-13 20:23:07.763783 | orchestrator | Tuesday 13 May 2025 20:18:05 +0000 (0:00:11.806) 0:04:22.657 ***********
2025-05-13 20:23:07.763793 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:23:07.763802 | orchestrator |
2025-05-13 20:23:07.763812 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-13 20:23:07.763821 | orchestrator | Tuesday 13 May 2025 20:18:07 +0000 (0:00:01.313) 0:04:23.970 ***********
2025-05-13 20:23:07.763830 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:23:07.763838 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:23:07.763846 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:23:07.763854 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:23:07.763861 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:23:07.763869 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:23:07.763877 | orchestrator |
2025-05-13 20:23:07.763884 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
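The task whose output follows loads the br_netfilter kernel module on the compute nodes (testbed-node-3 through testbed-node-5) and persists it via modules-load.d; the subsequent nova-cell task then enables the bridge-nf-call sysctls, which iptables-based Neutron security-group filtering needs on the hypervisors. For readers reproducing this step outside the testbed, here is a minimal standalone sketch of the equivalent steps. It approximates, but is not, the kolla-ansible module-load role; the "compute" hosts pattern and the modules-load.d file name are illustrative assumptions, not taken from this job's output.

# Hedged sketch: approximates the module load/persist and sysctl steps
# logged below. Hosts pattern and destination file name are assumptions.
- hosts: compute
  become: true
  tasks:
    - name: Load br_netfilter immediately
      community.general.modprobe:
        name: br_netfilter
        state: present

    - name: Persist br_netfilter across reboots via modules-load.d
      ansible.builtin.copy:
        content: "br_netfilter\n"
        dest: /etc/modules-load.d/br_netfilter.conf  # file name is an assumption
        mode: "0644"

    - name: Enable bridge-nf-call sysctl variables
      ansible.posix.sysctl:
        name: "{{ item }}"
        value: "1"
        state: present
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables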
2025-05-13 20:23:07.763892 | orchestrator | Tuesday 13 May 2025 20:18:08 +0000 (0:00:00.797) 0:04:24.768 ***********
2025-05-13 20:23:07.763900 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:23:07.763908 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:23:07.763916 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:23:07.763924 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-13 20:23:07.763932 | orchestrator |
2025-05-13 20:23:07.763939 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-13 20:23:07.763952 | orchestrator | Tuesday 13 May 2025 20:18:09 +0000 (0:00:01.049) 0:04:25.818 ***********
2025-05-13 20:23:07.763960 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-13 20:23:07.763968 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-13 20:23:07.763975 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-13 20:23:07.763983 | orchestrator |
2025-05-13 20:23:07.763991 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-13 20:23:07.763999 | orchestrator | Tuesday 13 May 2025 20:18:09 +0000 (0:00:00.704) 0:04:26.523 ***********
2025-05-13 20:23:07.764006 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-13 20:23:07.764014 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-13 20:23:07.764022 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-13 20:23:07.764030 | orchestrator |
2025-05-13 20:23:07.764037 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-13 20:23:07.764045 | orchestrator | Tuesday 13 May 2025 20:18:11 +0000 (0:00:01.186) 0:04:27.710 ***********
2025-05-13 20:23:07.764053 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-05-13 20:23:07.764061 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:23:07.764068 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-05-13 20:23:07.764083 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:23:07.764090 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-05-13 20:23:07.764098 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:23:07.764106 | orchestrator |
2025-05-13 20:23:07.764113 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-13 20:23:07.764121 | orchestrator | Tuesday 13 May 2025 20:18:11 +0000 (0:00:00.727) 0:04:28.437 ***********
2025-05-13 20:23:07.764129 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764137 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764145 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:23:07.764157 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764166 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764173 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:23:07.764181 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764189 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764196 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764204 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:23:07.764212 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764219 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-13 20:23:07.764241 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764249 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764257 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-13 20:23:07.764265 | orchestrator |
2025-05-13 20:23:07.764273 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-13 20:23:07.764280 | orchestrator | Tuesday 13 May 2025 20:18:13 +0000 (0:00:02.188) 0:04:30.626 ***********
2025-05-13 20:23:07.764288 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:23:07.764296 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:23:07.764304 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:23:07.764311 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:23:07.764319 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:23:07.764327 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:23:07.764334 | orchestrator |
2025-05-13 20:23:07.764342 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-05-13 20:23:07.764350 | orchestrator | Tuesday 13 May 2025 20:18:15 +0000 (0:00:01.830) 0:04:32.456 ***********
2025-05-13 20:23:07.764357 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:23:07.764365 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:23:07.764372 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:23:07.764380 | orchestrator | changed: [testbed-node-3]
2025-05-13 20:23:07.764388 | orchestrator | changed: [testbed-node-5]
2025-05-13 20:23:07.764395 | orchestrator | changed: [testbed-node-4]
2025-05-13 20:23:07.764403 | orchestrator |
2025-05-13 20:23:07.764411 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-13 20:23:07.764418 | orchestrator | Tuesday 13 May 2025 20:18:17 +0000 (0:00:01.781) 0:04:34.238 ***********
2025-05-13 20:23:07.764427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-13 20:23:07.764447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2',
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764631 | orchestrator | 2025-05-13 20:23:07.764638 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 20:23:07.764646 | orchestrator | Tuesday 13 May 2025 20:18:20 +0000 (0:00:02.659) 0:04:36.897 *********** 2025-05-13 20:23:07.764659 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:23:07.764669 | orchestrator | 2025-05-13 20:23:07.764676 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-13 20:23:07.764684 | orchestrator | Tuesday 13 May 2025 20:18:21 +0000 (0:00:01.334) 0:04:38.231 *********** 2025-05-13 20:23:07.764692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.764862 | orchestrator | 2025-05-13 20:23:07.764870 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] 
*** 2025-05-13 20:23:07.764878 | orchestrator | Tuesday 13 May 2025 20:18:26 +0000 (0:00:04.669) 0:04:42.901 *********** 2025-05-13 20:23:07.764891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.764900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.764913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.764921 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.764930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.764944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.764959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.764968 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.764976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.764989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.764998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765012 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.765021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.765029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765037 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.765051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.765060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765068 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.765080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2025-05-13 20:23:07.765089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765103 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.765111 | orchestrator | 2025-05-13 20:23:07.765118 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-13 20:23:07.765126 | orchestrator | Tuesday 13 May 2025 20:18:28 +0000 (0:00:02.452) 0:04:45.353 *********** 2025-05-13 20:23:07.765135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.765143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.765274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765288 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.765297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.765310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.765328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765337 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.765345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.765358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.765367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765375 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.765387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.765401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765410 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.765418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.765426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765434 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.765443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.765455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.765464 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.765472 | orchestrator | 2025-05-13 20:23:07.765480 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 20:23:07.765488 | orchestrator | Tuesday 13 May 2025 20:18:31 +0000 (0:00:03.115) 0:04:48.469 *********** 2025-05-13 20:23:07.765496 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.765504 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.765512 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.765520 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-13 20:23:07.765528 | orchestrator | 2025-05-13 20:23:07.765536 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-13 20:23:07.765549 | orchestrator | Tuesday 13 May 2025 20:18:32 +0000 (0:00:00.922) 0:04:49.391 *********** 2025-05-13 20:23:07.765557 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 20:23:07.765565 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 20:23:07.765573 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 20:23:07.765581 | orchestrator | 2025-05-13 20:23:07.765589 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-13 20:23:07.765600 | orchestrator | Tuesday 13 May 2025 20:18:34 +0000 (0:00:01.513) 0:04:50.905 *********** 2025-05-13 20:23:07.765609 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 20:23:07.765616 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-13 20:23:07.765624 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-13 20:23:07.765632 | orchestrator | 2025-05-13 20:23:07.765640 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-13 20:23:07.765648 | orchestrator | Tuesday 13 May 2025 20:18:35 +0000 (0:00:01.159) 0:04:52.065 *********** 2025-05-13 20:23:07.765656 | 
orchestrator | ok: [testbed-node-3] 2025-05-13 20:23:07.765664 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:23:07.765672 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:23:07.765680 | orchestrator | 2025-05-13 20:23:07.765688 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-13 20:23:07.765695 | orchestrator | Tuesday 13 May 2025 20:18:36 +0000 (0:00:00.737) 0:04:52.802 *********** 2025-05-13 20:23:07.765703 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:23:07.765711 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:23:07.765719 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:23:07.765726 | orchestrator | 2025-05-13 20:23:07.765734 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-13 20:23:07.765742 | orchestrator | Tuesday 13 May 2025 20:18:36 +0000 (0:00:00.524) 0:04:53.326 *********** 2025-05-13 20:23:07.765750 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 20:23:07.765758 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 20:23:07.765765 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 20:23:07.765773 | orchestrator | 2025-05-13 20:23:07.765781 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-13 20:23:07.765788 | orchestrator | Tuesday 13 May 2025 20:18:37 +0000 (0:00:01.301) 0:04:54.628 *********** 2025-05-13 20:23:07.765796 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 20:23:07.765804 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 20:23:07.765812 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 20:23:07.765818 | orchestrator | 2025-05-13 20:23:07.765825 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-13 20:23:07.765831 | orchestrator | Tuesday 13 May 2025 20:18:39 +0000 (0:00:01.166) 0:04:55.795 *********** 2025-05-13 20:23:07.765838 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-13 20:23:07.765845 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-13 20:23:07.765851 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-13 20:23:07.765858 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-13 20:23:07.765864 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-13 20:23:07.765871 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-13 20:23:07.765878 | orchestrator | 2025-05-13 20:23:07.765884 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-13 20:23:07.765891 | orchestrator | Tuesday 13 May 2025 20:18:43 +0000 (0:00:04.677) 0:05:00.472 *********** 2025-05-13 20:23:07.765898 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.765904 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.765911 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.765918 | orchestrator | 2025-05-13 20:23:07.765924 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-13 20:23:07.765936 | orchestrator | Tuesday 13 May 2025 20:18:44 +0000 (0:00:00.985) 0:05:01.458 *********** 2025-05-13 20:23:07.765943 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.765950 | orchestrator | skipping: [testbed-node-4] 
2025-05-13 20:23:07.765956 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.765963 | orchestrator | 2025-05-13 20:23:07.765969 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-13 20:23:07.765976 | orchestrator | Tuesday 13 May 2025 20:18:45 +0000 (0:00:00.609) 0:05:02.067 *********** 2025-05-13 20:23:07.765983 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.765989 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.765996 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.766002 | orchestrator | 2025-05-13 20:23:07.766075 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-13 20:23:07.766086 | orchestrator | Tuesday 13 May 2025 20:18:48 +0000 (0:00:02.889) 0:05:04.956 *********** 2025-05-13 20:23:07.766093 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 20:23:07.766101 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 20:23:07.766107 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-13 20:23:07.766114 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 20:23:07.766121 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 20:23:07.766128 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-13 20:23:07.766134 | orchestrator | 2025-05-13 20:23:07.766141 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-13 20:23:07.766148 | orchestrator | Tuesday 13 May 2025 20:18:52 +0000 (0:00:03.773) 0:05:08.730 *********** 2025-05-13 20:23:07.766154 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 20:23:07.766161 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:23:07.766168 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:23:07.766174 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-13 20:23:07.766188 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.766195 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-13 20:23:07.766201 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.766207 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-13 20:23:07.766214 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.766220 | orchestrator | 2025-05-13 20:23:07.766243 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-13 20:23:07.766251 | orchestrator | Tuesday 13 May 2025 20:18:55 +0000 (0:00:03.792) 0:05:12.523 *********** 2025-05-13 20:23:07.766257 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.766264 | orchestrator | 2025-05-13 20:23:07.766270 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-13 20:23:07.766277 | orchestrator | Tuesday 13 May 2025 20:18:55 +0000 (0:00:00.096) 0:05:12.619 
*********** 2025-05-13 20:23:07.766284 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.766290 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.766297 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.766303 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.766310 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.766316 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.766323 | orchestrator | 2025-05-13 20:23:07.766329 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-13 20:23:07.766342 | orchestrator | Tuesday 13 May 2025 20:18:56 +0000 (0:00:00.867) 0:05:13.486 *********** 2025-05-13 20:23:07.766348 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-13 20:23:07.766355 | orchestrator | 2025-05-13 20:23:07.766362 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-13 20:23:07.766368 | orchestrator | Tuesday 13 May 2025 20:18:57 +0000 (0:00:00.739) 0:05:14.225 *********** 2025-05-13 20:23:07.766375 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.766382 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.766388 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.766394 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.766401 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.766407 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.766414 | orchestrator | 2025-05-13 20:23:07.766420 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-13 20:23:07.766427 | orchestrator | Tuesday 13 May 2025 20:18:58 +0000 (0:00:00.595) 0:05:14.821 *********** 2025-05-13 20:23:07.766434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766505 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766574 | orchestrator | 2025-05-13 20:23:07.766581 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-13 20:23:07.766587 | orchestrator | Tuesday 13 May 2025 20:19:02 +0000 (0:00:04.649) 0:05:19.470 *********** 2025-05-13 20:23:07.766598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.766610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.766618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.766625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.766636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.766644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.766659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.766737 | orchestrator | 2025-05-13 20:23:07.766743 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-13 20:23:07.766750 | orchestrator | Tuesday 13 May 2025 20:19:09 +0000 (0:00:06.299) 0:05:25.770 *********** 2025-05-13 20:23:07.766757 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.766764 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.766770 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.766777 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.766783 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.766790 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.766796 | orchestrator | 2025-05-13 20:23:07.766803 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-13 20:23:07.766809 | orchestrator | Tuesday 13 May 2025 20:19:10 +0000 (0:00:01.646) 0:05:27.416 *********** 2025-05-13 20:23:07.766816 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 20:23:07.766822 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 20:23:07.766829 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-13 20:23:07.766836 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-13 20:23:07.766846 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-13 20:23:07.766853 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-13 20:23:07.766859 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 20:23:07.766866 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.766877 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 20:23:07.766884 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.766891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-13 20:23:07.766897 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.766904 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 20:23:07.766911 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 20:23:07.766917 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-13 20:23:07.766924 | orchestrator | 2025-05-13 20:23:07.766931 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-13 20:23:07.766937 | orchestrator | Tuesday 13 May 2025 20:19:17 +0000 (0:00:06.469) 0:05:33.885 *********** 2025-05-13 20:23:07.766944 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.766951 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.766957 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.766964 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.766970 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.766977 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.766983 | orchestrator | 2025-05-13 20:23:07.766990 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-13 20:23:07.767001 | orchestrator | Tuesday 13 May 2025 20:19:18 +0000 (0:00:00.960) 0:05:34.846 *********** 2025-05-13 20:23:07.767007 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 20:23:07.767014 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 20:23:07.767021 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 20:23:07.767028 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 20:23:07.767034 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-13 20:23:07.767041 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-13 20:23:07.767048 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767054 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767061 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767067 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767074 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767081 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767088 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767094 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767101 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767108 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767114 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-13 20:23:07.767121 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767132 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767139 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767145 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-13 20:23:07.767152 | orchestrator | 2025-05-13 20:23:07.767158 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-13 20:23:07.767165 | orchestrator | Tuesday 13 May 2025 20:19:25 +0000 (0:00:07.405) 0:05:42.251 *********** 2025-05-13 20:23:07.767172 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 20:23:07.767178 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 20:23:07.767188 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-13 20:23:07.767195 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:23:07.767202 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:23:07.767208 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 20:23:07.767215 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 20:23:07.767221 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-13 20:23:07.767244 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-13 20:23:07.767251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 
20:23:07.767258 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 20:23:07.767265 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-13 20:23:07.767271 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 20:23:07.767278 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767284 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 20:23:07.767291 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767298 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:23:07.767304 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:23:07.767316 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-13 20:23:07.767323 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-13 20:23:07.767330 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767336 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:23:07.767343 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:23:07.767350 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-13 20:23:07.767357 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:23:07.767364 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:23:07.767370 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-13 20:23:07.767377 | orchestrator | 2025-05-13 20:23:07.767383 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-13 20:23:07.767390 | orchestrator | Tuesday 13 May 2025 20:19:36 +0000 (0:00:10.584) 0:05:52.836 *********** 2025-05-13 20:23:07.767397 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.767408 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.767415 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.767422 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767428 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767435 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767441 | orchestrator | 2025-05-13 20:23:07.767448 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-13 20:23:07.767454 | orchestrator | Tuesday 13 May 2025 20:19:36 +0000 (0:00:00.571) 0:05:53.408 *********** 2025-05-13 20:23:07.767461 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.767468 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.767474 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.767481 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767487 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767494 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767500 | orchestrator | 2025-05-13 20:23:07.767507 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-13 
20:23:07.767513 | orchestrator | Tuesday 13 May 2025 20:19:37 +0000 (0:00:00.823) 0:05:54.231 *********** 2025-05-13 20:23:07.767520 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767527 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767533 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.767540 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767546 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.767553 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.767559 | orchestrator | 2025-05-13 20:23:07.767566 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-13 20:23:07.767572 | orchestrator | Tuesday 13 May 2025 20:19:39 +0000 (0:00:02.432) 0:05:56.664 *********** 2025-05-13 20:23:07.767584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.767592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.767603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.767614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.767622 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.767629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.767636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.767643 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.767654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-13 20:23:07.767662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-13 20:23:07.767677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.767684 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.767691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.767698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.767705 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.767723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 
20:23:07.767730 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-13 20:23:07.767752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-13 20:23:07.767759 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767766 | orchestrator | 2025-05-13 20:23:07.767773 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-13 20:23:07.767779 | orchestrator | Tuesday 13 May 2025 20:19:43 +0000 (0:00:03.201) 0:05:59.865 *********** 2025-05-13 20:23:07.767786 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-13 20:23:07.767793 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767799 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.767806 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-13 20:23:07.767812 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767819 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.767826 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-13 20:23:07.767832 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767839 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.767846 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-13 20:23:07.767852 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767859 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.767866 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-13 20:23:07.767872 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767879 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.767885 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-13 20:23:07.767892 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-13 20:23:07.767898 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.767905 | orchestrator | 2025-05-13 20:23:07.767912 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-13 20:23:07.767919 | orchestrator | Tuesday 13 May 2025 20:19:43 +0000 (0:00:00.510) 0:06:00.375 
*********** 2025-05-13 20:23:07.767926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-13 20:23:07.767993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-13 20:23:07.768127 | orchestrator | 2025-05-13 20:23:07.768134 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-13 20:23:07.768140 | orchestrator | Tuesday 13 May 2025 20:19:47 +0000 (0:00:03.637) 0:06:04.013 *********** 2025-05-13 20:23:07.768147 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.768154 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.768160 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.768167 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.768174 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.768180 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.768187 | orchestrator | 2025-05-13 20:23:07.768198 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768205 | orchestrator | Tuesday 13 May 2025 20:19:48 +0000 (0:00:00.835) 0:06:04.849 *********** 2025-05-13 20:23:07.768211 | orchestrator | 2025-05-13 20:23:07.768218 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768225 | orchestrator | Tuesday 13 May 2025 20:19:48 +0000 (0:00:00.357) 0:06:05.206 *********** 2025-05-13 20:23:07.768269 | orchestrator | 2025-05-13 20:23:07.768276 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768283 | orchestrator | Tuesday 13 May 2025 20:19:48 +0000 (0:00:00.142) 0:06:05.349 *********** 2025-05-13 20:23:07.768290 | orchestrator | 2025-05-13 20:23:07.768296 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768303 | orchestrator | Tuesday 13 May 2025 20:19:48 +0000 (0:00:00.270) 0:06:05.620 *********** 2025-05-13 20:23:07.768309 | orchestrator | 2025-05-13 20:23:07.768316 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768323 | orchestrator | Tuesday 13 May 2025 20:19:49 +0000 (0:00:00.164) 0:06:05.785 *********** 2025-05-13 20:23:07.768329 | orchestrator | 2025-05-13 20:23:07.768336 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-13 20:23:07.768343 | orchestrator | Tuesday 13 May 2025 20:19:49 +0000 (0:00:00.134) 0:06:05.919 *********** 2025-05-13 20:23:07.768349 | orchestrator | 2025-05-13 20:23:07.768356 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-13 20:23:07.768362 | orchestrator | Tuesday 13 May 2025 20:19:49 +0000 (0:00:00.131) 0:06:06.050 *********** 2025-05-13 20:23:07.768369 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.768376 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.768382 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:23:07.768389 | orchestrator | 2025-05-13 20:23:07.768395 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-13 20:23:07.768402 | orchestrator | Tuesday 13 May 2025 20:20:00 +0000 (0:00:11.501) 0:06:17.551 *********** 2025-05-13 20:23:07.768409 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.768420 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.768427 | orchestrator | changed: 
[testbed-node-2] 2025-05-13 20:23:07.768434 | orchestrator | 2025-05-13 20:23:07.768440 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-13 20:23:07.768447 | orchestrator | Tuesday 13 May 2025 20:20:18 +0000 (0:00:17.336) 0:06:34.887 *********** 2025-05-13 20:23:07.768454 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.768460 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.768467 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.768473 | orchestrator | 2025-05-13 20:23:07.768480 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-13 20:23:07.768486 | orchestrator | Tuesday 13 May 2025 20:20:40 +0000 (0:00:22.449) 0:06:57.337 *********** 2025-05-13 20:23:07.768493 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.768499 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.768506 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.768513 | orchestrator | 2025-05-13 20:23:07.768519 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-13 20:23:07.768526 | orchestrator | Tuesday 13 May 2025 20:21:28 +0000 (0:00:48.180) 0:07:45.518 *********** 2025-05-13 20:23:07.768532 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.768538 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.768544 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.768550 | orchestrator | 2025-05-13 20:23:07.768557 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-13 20:23:07.768563 | orchestrator | Tuesday 13 May 2025 20:21:29 +0000 (0:00:00.956) 0:07:46.474 *********** 2025-05-13 20:23:07.768569 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.768575 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.768582 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.768588 | orchestrator | 2025-05-13 20:23:07.768594 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-13 20:23:07.768604 | orchestrator | Tuesday 13 May 2025 20:21:30 +0000 (0:00:00.773) 0:07:47.248 *********** 2025-05-13 20:23:07.768611 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:23:07.768617 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:23:07.768623 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:23:07.768629 | orchestrator | 2025-05-13 20:23:07.768635 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-13 20:23:07.768641 | orchestrator | Tuesday 13 May 2025 20:21:55 +0000 (0:00:24.513) 0:08:11.761 *********** 2025-05-13 20:23:07.768648 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.768654 | orchestrator | 2025-05-13 20:23:07.768660 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-13 20:23:07.768666 | orchestrator | Tuesday 13 May 2025 20:21:55 +0000 (0:00:00.125) 0:08:11.887 *********** 2025-05-13 20:23:07.768672 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.768678 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.768684 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.768690 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.768697 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.768703 | orchestrator | 
FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-05-13 20:23:07.768709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:23:07.768715 | orchestrator | 2025-05-13 20:23:07.768722 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-13 20:23:07.768728 | orchestrator | Tuesday 13 May 2025 20:22:18 +0000 (0:00:22.876) 0:08:34.764 *********** 2025-05-13 20:23:07.768734 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.768740 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.768746 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.768753 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.768765 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.768771 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.768777 | orchestrator | 2025-05-13 20:23:07.768787 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-13 20:23:07.768793 | orchestrator | Tuesday 13 May 2025 20:22:29 +0000 (0:00:11.251) 0:08:46.016 *********** 2025-05-13 20:23:07.768799 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.768805 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.768811 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.768818 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.768824 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.768830 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-05-13 20:23:07.768836 | orchestrator | 2025-05-13 20:23:07.768842 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-13 20:23:07.768848 | orchestrator | Tuesday 13 May 2025 20:22:33 +0000 (0:00:04.498) 0:08:50.514 *********** 2025-05-13 20:23:07.768854 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:23:07.768861 | orchestrator | 2025-05-13 20:23:07.768867 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-13 20:23:07.768873 | orchestrator | Tuesday 13 May 2025 20:22:45 +0000 (0:00:11.873) 0:09:02.387 *********** 2025-05-13 20:23:07.768879 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:23:07.768885 | orchestrator | 2025-05-13 20:23:07.768892 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-13 20:23:07.768898 | orchestrator | Tuesday 13 May 2025 20:22:46 +0000 (0:00:01.152) 0:09:03.540 *********** 2025-05-13 20:23:07.768904 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.768910 | orchestrator | 2025-05-13 20:23:07.768916 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-13 20:23:07.768922 | orchestrator | Tuesday 13 May 2025 20:22:48 +0000 (0:00:01.198) 0:09:04.738 *********** 2025-05-13 20:23:07.768929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:23:07.768935 | orchestrator | 2025-05-13 20:23:07.768941 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-13 20:23:07.768947 | orchestrator | Tuesday 13 May 2025 20:22:58 +0000 (0:00:09.971) 0:09:14.709 *********** 2025-05-13 20:23:07.768953 | orchestrator | ok: 
[testbed-node-3] 2025-05-13 20:23:07.768959 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:23:07.768966 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:23:07.768972 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:23:07.768978 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:23:07.768984 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:23:07.768990 | orchestrator | 2025-05-13 20:23:07.768996 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-13 20:23:07.769002 | orchestrator | 2025-05-13 20:23:07.769009 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-13 20:23:07.769015 | orchestrator | Tuesday 13 May 2025 20:22:59 +0000 (0:00:01.713) 0:09:16.423 *********** 2025-05-13 20:23:07.769021 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:23:07.769027 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:23:07.769034 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:23:07.769040 | orchestrator | 2025-05-13 20:23:07.769046 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-13 20:23:07.769052 | orchestrator | 2025-05-13 20:23:07.769058 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-13 20:23:07.769064 | orchestrator | Tuesday 13 May 2025 20:23:00 +0000 (0:00:01.157) 0:09:17.581 *********** 2025-05-13 20:23:07.769070 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.769077 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.769083 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.769089 | orchestrator | 2025-05-13 20:23:07.769095 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-13 20:23:07.769108 | orchestrator | 2025-05-13 20:23:07.769114 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-13 20:23:07.769120 | orchestrator | Tuesday 13 May 2025 20:23:01 +0000 (0:00:00.539) 0:09:18.120 *********** 2025-05-13 20:23:07.769127 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-13 20:23:07.769136 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-13 20:23:07.769143 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769149 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-13 20:23:07.769155 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-13 20:23:07.769162 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769168 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:23:07.769174 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-13 20:23:07.769180 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-13 20:23:07.769186 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769193 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-13 20:23:07.769199 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-13 20:23:07.769205 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769211 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:23:07.769217 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-conductor)  2025-05-13 20:23:07.769224 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-13 20:23:07.769242 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769249 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-13 20:23:07.769255 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-13 20:23:07.769261 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769267 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:23:07.769273 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-13 20:23:07.769280 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-13 20:23:07.769290 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769296 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-13 20:23:07.769305 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-13 20:23:07.769314 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769325 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.769331 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-13 20:23:07.769337 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-13 20:23:07.769343 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769349 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-13 20:23:07.769355 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-13 20:23:07.769361 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769368 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.769374 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-13 20:23:07.769380 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-13 20:23:07.769386 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-13 20:23:07.769392 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-13 20:23:07.769398 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-13 20:23:07.769404 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-13 20:23:07.769410 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.769421 | orchestrator | 2025-05-13 20:23:07.769428 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-13 20:23:07.769434 | orchestrator | 2025-05-13 20:23:07.769440 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-13 20:23:07.769446 | orchestrator | Tuesday 13 May 2025 20:23:02 +0000 (0:00:01.319) 0:09:19.439 *********** 2025-05-13 20:23:07.769452 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-13 20:23:07.769458 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-13 20:23:07.769464 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.769471 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-13 20:23:07.769477 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-13 20:23:07.769483 | orchestrator | skipping: 
[testbed-node-1] 2025-05-13 20:23:07.769489 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-13 20:23:07.769495 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-13 20:23:07.769501 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.769507 | orchestrator | 2025-05-13 20:23:07.769513 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-13 20:23:07.769519 | orchestrator | 2025-05-13 20:23:07.769525 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-13 20:23:07.769531 | orchestrator | Tuesday 13 May 2025 20:23:03 +0000 (0:00:00.692) 0:09:20.132 *********** 2025-05-13 20:23:07.769537 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.769543 | orchestrator | 2025-05-13 20:23:07.769549 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-13 20:23:07.769555 | orchestrator | 2025-05-13 20:23:07.769561 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-13 20:23:07.769567 | orchestrator | Tuesday 13 May 2025 20:23:04 +0000 (0:00:00.665) 0:09:20.797 *********** 2025-05-13 20:23:07.769572 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:23:07.769578 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:23:07.769584 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:23:07.769590 | orchestrator | 2025-05-13 20:23:07.769596 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:23:07.769603 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:23:07.769613 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-13 20:23:07.769620 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-13 20:23:07.769626 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-13 20:23:07.769633 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-13 20:23:07.769639 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-13 20:23:07.769645 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-13 20:23:07.769651 | orchestrator | 2025-05-13 20:23:07.769657 | orchestrator | 2025-05-13 20:23:07.769663 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:23:07.769670 | orchestrator | Tuesday 13 May 2025 20:23:04 +0000 (0:00:00.448) 0:09:21.246 *********** 2025-05-13 20:23:07.769676 | orchestrator | =============================================================================== 2025-05-13 20:23:07.769682 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 48.18s 2025-05-13 20:23:07.769693 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.59s 2025-05-13 20:23:07.769702 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.51s 2025-05-13 20:23:07.769709 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.88s 2025-05-13 
20:23:07.769715 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.45s 2025-05-13 20:23:07.769721 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.41s 2025-05-13 20:23:07.769727 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.23s 2025-05-13 20:23:07.769733 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.67s 2025-05-13 20:23:07.769739 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.34s 2025-05-13 20:23:07.769745 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.30s 2025-05-13 20:23:07.769751 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.05s 2025-05-13 20:23:07.769757 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.87s 2025-05-13 20:23:07.769763 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.81s 2025-05-13 20:23:07.769770 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.50s 2025-05-13 20:23:07.769776 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.25s 2025-05-13 20:23:07.769782 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.93s 2025-05-13 20:23:07.769788 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.91s 2025-05-13 20:23:07.769794 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.58s 2025-05-13 20:23:07.769801 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.97s 2025-05-13 20:23:07.769807 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.44s 2025-05-13 20:23:07.769813 | orchestrator | 2025-05-13 20:23:07 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state STARTED 2025-05-13 20:23:07.769819 | orchestrator | 2025-05-13 20:23:07 | INFO  | Wait 1 second(s) until the next check [... identical "state STARTED" / "Wait 1 second(s) until the next check" records repeated roughly every 3 seconds from 20:23:10 through 20:25:09 omitted ...]
2025-05-13 20:25:12.804667 | orchestrator | 2025-05-13 20:25:12 | INFO  | Task 2cd6ec30-ed17-4090-86c0-1267d99a9571 is in state SUCCESS 2025-05-13 20:25:12.806493 | orchestrator | 2025-05-13 20:25:12.806561 | orchestrator | 2025-05-13 20:25:12.806584 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:25:12.806735 | orchestrator | 2025-05-13 20:25:12.807606 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:25:12.807630 | orchestrator | Tuesday 13 May 2025 20:20:37 +0000 (0:00:00.268) 0:00:00.268 *********** 2025-05-13 20:25:12.807650 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:25:12.807668 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:25:12.807687 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:25:12.807705 | orchestrator | 2025-05-13 20:25:12.807723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:25:12.807742 | orchestrator | Tuesday 13 May 2025 20:20:38 +0000 (0:00:00.324) 0:00:00.593 *********** 2025-05-13 20:25:12.807761 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-13 20:25:12.807780 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-13 20:25:12.807798 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-13 20:25:12.807817 | orchestrator | 2025-05-13 20:25:12.807836 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-13 20:25:12.807854 | orchestrator | 2025-05-13 20:25:12.807871 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-13 20:25:12.807890 | orchestrator | Tuesday 13 May 2025 20:20:38 +0000 (0:00:00.435) 0:00:01.029 *********** 2025-05-13 20:25:12.807935 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:25:12.807956 | orchestrator | 
2025-05-13 20:25:12.807981 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-13 20:25:12.808001 | orchestrator | Tuesday 13 May 2025 20:20:38 +0000 (0:00:00.515) 0:00:01.544 *********** 2025-05-13 20:25:12.808019 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-13 20:25:12.808036 | orchestrator | 2025-05-13 20:25:12.808052 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-13 20:25:12.808068 | orchestrator | Tuesday 13 May 2025 20:20:42 +0000 (0:00:03.362) 0:00:04.907 *********** 2025-05-13 20:25:12.808084 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-13 20:25:12.808101 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-13 20:25:12.808117 | orchestrator | 2025-05-13 20:25:12.808133 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-13 20:25:12.808149 | orchestrator | Tuesday 13 May 2025 20:20:48 +0000 (0:00:06.360) 0:00:11.267 *********** 2025-05-13 20:25:12.808165 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-13 20:25:12.808182 | orchestrator | 2025-05-13 20:25:12.808252 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-13 20:25:12.808270 | orchestrator | Tuesday 13 May 2025 20:20:51 +0000 (0:00:03.119) 0:00:14.386 *********** 2025-05-13 20:25:12.808287 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-13 20:25:12.808304 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-13 20:25:12.808321 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-13 20:25:12.808337 | orchestrator | 2025-05-13 20:25:12.808352 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-13 20:25:12.808367 | orchestrator | Tuesday 13 May 2025 20:20:59 +0000 (0:00:08.091) 0:00:22.477 *********** 2025-05-13 20:25:12.808383 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-13 20:25:12.808400 | orchestrator | 2025-05-13 20:25:12.808415 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-13 20:25:12.808431 | orchestrator | Tuesday 13 May 2025 20:21:03 +0000 (0:00:03.214) 0:00:25.692 *********** 2025-05-13 20:25:12.808447 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-13 20:25:12.808463 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-13 20:25:12.808480 | orchestrator | 2025-05-13 20:25:12.808497 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-13 20:25:12.808512 | orchestrator | Tuesday 13 May 2025 20:21:10 +0000 (0:00:07.117) 0:00:32.809 *********** 2025-05-13 20:25:12.808528 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-13 20:25:12.808544 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-13 20:25:12.808560 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-13 20:25:12.808576 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-13 20:25:12.808592 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-13 
20:25:12.808608 | orchestrator | 2025-05-13 20:25:12.808623 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-13 20:25:12.808639 | orchestrator | Tuesday 13 May 2025 20:21:25 +0000 (0:00:15.097) 0:00:47.907 *********** 2025-05-13 20:25:12.808655 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:25:12.808671 | orchestrator | 2025-05-13 20:25:12.808687 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-13 20:25:12.808703 | orchestrator | Tuesday 13 May 2025 20:21:25 +0000 (0:00:00.560) 0:00:48.467 *********** 2025-05-13 20:25:12.808714 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.808723 | orchestrator | 2025-05-13 20:25:12.808733 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-05-13 20:25:12.808742 | orchestrator | Tuesday 13 May 2025 20:21:30 +0000 (0:00:04.955) 0:00:53.423 *********** 2025-05-13 20:25:12.808752 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.808761 | orchestrator | 2025-05-13 20:25:12.808771 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-13 20:25:12.808836 | orchestrator | Tuesday 13 May 2025 20:21:35 +0000 (0:00:04.312) 0:00:57.736 *********** 2025-05-13 20:25:12.808847 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:25:12.808857 | orchestrator | 2025-05-13 20:25:12.808867 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-05-13 20:25:12.808876 | orchestrator | Tuesday 13 May 2025 20:21:38 +0000 (0:00:03.205) 0:01:00.941 *********** 2025-05-13 20:25:12.808886 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-13 20:25:12.808896 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-13 20:25:12.808905 | orchestrator | 2025-05-13 20:25:12.808959 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-05-13 20:25:12.808974 | orchestrator | Tuesday 13 May 2025 20:21:49 +0000 (0:00:10.946) 0:01:11.887 *********** 2025-05-13 20:25:12.808988 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-05-13 20:25:12.809019 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-05-13 20:25:12.809038 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-05-13 20:25:12.809055 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-05-13 20:25:12.809065 | orchestrator | 2025-05-13 20:25:12.809074 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-05-13 20:25:12.809084 | orchestrator | Tuesday 13 May 2025 20:22:05 +0000 (0:00:16.407) 0:01:28.295 *********** 2025-05-13 20:25:12.809093 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.809103 | orchestrator | 2025-05-13 20:25:12.809112 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-05-13 20:25:12.809122 | orchestrator | Tuesday 13 May 2025 
2025-05-13 20:25:12.809232 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-13 20:25:12.809242 | orchestrator | Tuesday 13 May 2025 20:22:21 +0000 (0:00:05.162) 0:01:44.487 ***********
2025-05-13 20:25:12.809251 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:25:12.809261 | orchestrator |
2025-05-13 20:25:12.809270 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-05-13 20:25:12.809279 | orchestrator | Tuesday 13 May 2025 20:22:24 +0000 (0:00:02.785) 0:01:47.272 ***********
2025-05-13 20:25:12.809289 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809299 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809308 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809318 | orchestrator |
2025-05-13 20:25:12.809327 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-05-13 20:25:12.809336 | orchestrator | Tuesday 13 May 2025 20:22:30 +0000 (0:00:05.966) 0:01:53.239 ***********
2025-05-13 20:25:12.809346 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809355 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809365 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809374 | orchestrator |
2025-05-13 20:25:12.809384 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-05-13 20:25:12.809393 | orchestrator | Tuesday 13 May 2025 20:22:36 +0000 (0:00:05.550) 0:01:58.790 ***********
2025-05-13 20:25:12.809402 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809412 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809421 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809431 | orchestrator |
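The hm-interface.yml include gives each control node a foot in the lb-mgmt network: a Neutron port per node, bound to that node via binding:host_id, then plugged into the local Open vSwitch integration bridge as ohm0. A sketch under assumptions (the module invocation, CLI call, port naming, and the registered result key are illustrative, not the role's code):

```yaml
# Hypothetical equivalent of the three tasks above, run on every node.
- name: Create port for the Octavia health manager on this node
  openstack.cloud.port:
    cloud: octavia
    name: "octavia-hm-port-{{ inventory_hostname }}"   # hypothetical name
    network: lb-mgmt-net
    security_groups: [lb-health-mgr-sec-grp]
  register: hm_port

- name: Update Octavia health manager port host_id
  # binding:host_id via the CLI; hm_port.port.id is an assumed result key
  ansible.builtin.command: >-
    openstack port set --host {{ ansible_hostname }} {{ hm_port.port.id }}

- name: Add Octavia port to openvswitch br-int
  # iface-id ties the OVS interface back to the Neutron port
  ansible.builtin.command: >-
    ovs-vsctl --may-exist add-port br-int ohm0
    -- set Interface ohm0 type=internal
    -- set Interface ohm0 external-ids:iface-id={{ hm_port.port.id }}
```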
2025-05-13 20:25:12.809440 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-05-13 20:25:12.809449 | orchestrator | Tuesday 13 May 2025 20:22:36 +0000 (0:00:00.753) 0:01:59.544 ***********
2025-05-13 20:25:12.809459 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:25:12.809468 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:25:12.809478 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.809493 | orchestrator |
2025-05-13 20:25:12.809503 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-05-13 20:25:12.809512 | orchestrator | Tuesday 13 May 2025 20:22:39 +0000 (0:00:02.225) 0:02:01.770 ***********
2025-05-13 20:25:12.809522 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809531 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809540 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809550 | orchestrator |
2025-05-13 20:25:12.809559 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-05-13 20:25:12.809568 | orchestrator | Tuesday 13 May 2025 20:22:40 +0000 (0:00:01.228) 0:02:02.998 ***********
2025-05-13 20:25:12.809578 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809588 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809597 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809607 | orchestrator |
2025-05-13 20:25:12.809616 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-05-13 20:25:12.809626 | orchestrator | Tuesday 13 May 2025 20:22:41 +0000 (0:00:01.220) 0:02:04.219 ***********
2025-05-13 20:25:12.809635 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809645 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809654 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809664 | orchestrator |
2025-05-13 20:25:12.809704 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-05-13 20:25:12.809715 | orchestrator | Tuesday 13 May 2025 20:22:43 +0000 (0:00:02.043) 0:02:06.263 ***********
2025-05-13 20:25:12.809724 | orchestrator | changed: [testbed-node-0]
2025-05-13 20:25:12.809734 | orchestrator | changed: [testbed-node-1]
2025-05-13 20:25:12.809743 | orchestrator | changed: [testbed-node-2]
2025-05-13 20:25:12.809753 | orchestrator |
2025-05-13 20:25:12.809762 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-05-13 20:25:12.809772 | orchestrator | Tuesday 13 May 2025 20:22:45 +0000 (0:00:01.895) 0:02:08.158 ***********
2025-05-13 20:25:12.809781 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.809791 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:25:12.809800 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:25:12.809809 | orchestrator |
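ohm0 then gets its address over DHCP from the lb-mgmt subnet: isc-dhcp-client is installed, a dedicated dhclient configuration is written, and a small systemd unit keeps the interface configured across reboots; the play waits until ohm0 actually has an IP before continuing. Roughly, with the unit contents and dhclient config path assumed:

```yaml
# Sketch of the service and wait tasks above; the real unit file differs.
- name: Create octavia-interface service
  ansible.builtin.copy:
    dest: /etc/systemd/system/octavia-interface.service
    content: |
      [Unit]
      Description=Octavia lb-mgmt interface ohm0
      After=openvswitch-switch.service

      [Service]
      Type=oneshot
      RemainAfterExit=true
      ExecStart=/sbin/dhclient ohm0 -cf /etc/dhcp/octavia-dhclient.conf

      [Install]
      WantedBy=multi-user.target
  notify: Restart octavia-interface.service   # handler not shown

- name: Enable and start octavia-interface.service
  ansible.builtin.systemd:
    name: octavia-interface.service
    enabled: true
    state: started
    daemon_reload: true

- name: Wait for interface ohm0 ip appear
  ansible.builtin.command: ip -4 -o addr show dev ohm0
  register: ohm0_addr
  retries: 30
  delay: 2
  until: ohm0_addr.stdout != ""
```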
2025-05-13 20:25:12.809819 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-05-13 20:25:12.809828 | orchestrator | Tuesday 13 May 2025 20:22:46 +0000 (0:00:00.623) 0:02:08.782 ***********
2025-05-13 20:25:12.809838 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:25:12.809847 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.809856 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:25:12.809865 | orchestrator |
2025-05-13 20:25:12.809875 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-13 20:25:12.809884 | orchestrator | Tuesday 13 May 2025 20:22:49 +0000 (0:00:02.880) 0:02:11.662 ***********
2025-05-13 20:25:12.809894 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-13 20:25:12.809903 | orchestrator |
2025-05-13 20:25:12.809936 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-05-13 20:25:12.809946 | orchestrator | Tuesday 13 May 2025 20:22:49 +0000 (0:00:00.720) 0:02:12.383 ***********
2025-05-13 20:25:12.809956 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.809965 | orchestrator |
2025-05-13 20:25:12.809975 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-05-13 20:25:12.809985 | orchestrator | Tuesday 13 May 2025 20:22:53 +0000 (0:00:04.050) 0:02:16.434 ***********
2025-05-13 20:25:12.809994 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.810004 | orchestrator |
2025-05-13 20:25:12.810013 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-05-13 20:25:12.810055 | orchestrator | Tuesday 13 May 2025 20:22:57 +0000 (0:00:03.129) 0:02:19.563 ***********
2025-05-13 20:25:12.810065 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-05-13 20:25:12.810075 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-05-13 20:25:12.810091 | orchestrator |
2025-05-13 20:25:12.810101 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-05-13 20:25:12.810116 | orchestrator | Tuesday 13 May 2025 20:23:03 +0000 (0:00:06.804) 0:02:26.367 ***********
2025-05-13 20:25:12.810125 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.810135 | orchestrator |
2025-05-13 20:25:12.810145 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-05-13 20:25:12.810154 | orchestrator | Tuesday 13 May 2025 20:23:06 +0000 (0:00:03.169) 0:02:29.537 ***********
2025-05-13 20:25:12.810164 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:25:12.810173 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:25:12.810183 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:25:12.810193 | orchestrator |
2025-05-13 20:25:12.810202 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-05-13 20:25:12.810212 | orchestrator | Tuesday 13 May 2025 20:23:07 +0000 (0:00:00.307) 0:02:29.844 ***********
2025-05-13 20:25:12.810225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-13 20:25:12.810266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876',
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.810279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.810290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.810311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.810322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2025-05-13 20:25:12.810343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.810377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.810461 | orchestrator | 2025-05-13 20:25:12.810471 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-13 20:25:12.810481 | orchestrator | Tuesday 13 May 2025 20:23:09 +0000 (0:00:02.653) 0:02:32.498 *********** 2025-05-13 20:25:12.810494 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.810512 | orchestrator | 2025-05-13 20:25:12.810560 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-13 20:25:12.810576 | orchestrator | Tuesday 13 May 2025 20:23:10 +0000 (0:00:00.356) 0:02:32.854 *********** 2025-05-13 20:25:12.810591 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.810606 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:25:12.810623 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:25:12.810638 | orchestrator | 2025-05-13 20:25:12.810654 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-13 20:25:12.810672 | orchestrator | Tuesday 13 May 2025 20:23:10 +0000 (0:00:00.336) 0:02:33.191 *********** 2025-05-13 20:25:12.810690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.810719 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.810736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.810767 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.810807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.810825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.810835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.810874 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:25:12.810884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.810948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.810970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.810995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811005 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:25:12.811015 | orchestrator | 2025-05-13 20:25:12.811024 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-13 20:25:12.811034 | orchestrator | Tuesday 13 May 2025 20:23:11 +0000 (0:00:00.708) 0:02:33.900 *********** 2025-05-13 20:25:12.811044 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:25:12.811053 | orchestrator | 2025-05-13 20:25:12.811063 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-13 20:25:12.811072 | orchestrator | Tuesday 13 May 2025 20:23:11 +0000 (0:00:00.544) 0:02:34.445 *********** 2025-05-13 20:25:12.811083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.811116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.811135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.811145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.811159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 
20:25:12.811169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.811179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.811300 | orchestrator | 2025-05-13 20:25:12.811310 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-05-13 20:25:12.811320 | orchestrator | Tuesday 13 May 2025 20:23:17 +0000 (0:00:05.125) 0:02:39.570 *********** 2025-05-13 20:25:12.811330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811389 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.811405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811459 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:25:12.811469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811568 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:25:12.811582 | orchestrator | 2025-05-13 20:25:12.811603 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-13 20:25:12.811619 | orchestrator | Tuesday 13 May 2025 20:23:17 +0000 (0:00:00.676) 0:02:40.247 *********** 2025-05-13 20:25:12.811636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811744 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.811768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.811857 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:25:12.811875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-13 20:25:12.811893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-13 20:25:12.811967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-13 20:25:12.811998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-13 20:25:12.812008 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:25:12.812017 | orchestrator | 2025-05-13 20:25:12.812027 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-13 20:25:12.812037 | orchestrator | Tuesday 13 May 2025 20:23:18 +0000 (0:00:00.872) 0:02:41.120 *********** 2025-05-13 20:25:12.812055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812132 | orchestrator | 2025-05-13 20:25:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:12.812144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping',
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812251 | orchestrator | 2025-05-13 20:25:12.812260 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-13 20:25:12.812270 | orchestrator | Tuesday 13 May 2025 20:23:23 +0000 (0:00:05.177) 0:02:46.297 *********** 2025-05-13 20:25:12.812285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-13 20:25:12.812295 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-13 20:25:12.812309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-13 20:25:12.812319 | orchestrator | 2025-05-13 20:25:12.812329 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-13 20:25:12.812338 | orchestrator | Tuesday 13 May 2025 20:23:25 +0000 (0:00:01.589) 0:02:47.887 *********** 2025-05-13 20:25:12.812348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.812386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.812426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 
'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.812531 | orchestrator | 2025-05-13 20:25:12.812540 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-13 20:25:12.812550 | orchestrator | Tuesday 13 May 2025 20:23:41 +0000 (0:00:16.169) 0:03:04.057 *********** 2025-05-13 20:25:12.812560 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.812569 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.812579 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.812588 | orchestrator | 2025-05-13 20:25:12.812598 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-05-13 20:25:12.812608 | orchestrator | Tuesday 13 May 2025 20:23:43 +0000 (0:00:01.634) 0:03:05.691 *********** 2025-05-13 20:25:12.812623 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812633 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812642 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812652 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812662 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812672 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812681 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812691 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812705 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812715 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-13 20:25:12.812724 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-13 20:25:12.812734 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-13 20:25:12.812743 | orchestrator | 2025-05-13 20:25:12.812753 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-05-13 20:25:12.812762 | orchestrator | Tuesday 13 May 2025 20:23:48 +0000 (0:00:05.531) 0:03:11.223 *********** 2025-05-13 20:25:12.812772 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812781 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812791 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812800 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812810 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812820 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812829 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812838 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812848 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.812857 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-13 20:25:12.812867 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-13 20:25:12.812876 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-13 
20:25:12.812886 | orchestrator | 2025-05-13 20:25:12.812895 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-05-13 20:25:12.812905 | orchestrator | Tuesday 13 May 2025 20:23:53 +0000 (0:00:05.005) 0:03:16.228 *********** 2025-05-13 20:25:12.812933 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812943 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812953 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-13 20:25:12.812962 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812972 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812982 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-13 20:25:12.812991 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.813001 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.813010 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-13 20:25:12.813020 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-13 20:25:12.813030 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-13 20:25:12.813039 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-13 20:25:12.813049 | orchestrator | 2025-05-13 20:25:12.813059 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-05-13 20:25:12.813068 | orchestrator | Tuesday 13 May 2025 20:23:58 +0000 (0:00:05.034) 0:03:21.263 *********** 2025-05-13 20:25:12.813079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.813104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.813115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-13 20:25:12.813130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.813140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.813150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-13 20:25:12.813160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-13 20:25:12.813277 | orchestrator | 2025-05-13 20:25:12.813287 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-13 20:25:12.813296 | orchestrator | Tuesday 13 May 2025 20:24:02 +0000 (0:00:03.427) 0:03:24.691 *********** 2025-05-13 20:25:12.813306 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:25:12.813316 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:25:12.813326 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:25:12.813335 | orchestrator | 2025-05-13 20:25:12.813345 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-05-13 20:25:12.813355 | orchestrator | Tuesday 13 May 2025 20:24:02 +0000 (0:00:00.304) 0:03:24.996 *********** 2025-05-13 20:25:12.813365 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813374 | orchestrator | 2025-05-13 20:25:12.813384 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-05-13 20:25:12.813394 | orchestrator | Tuesday 13 May 2025 20:24:04 +0000 (0:00:02.343) 0:03:27.339 *********** 2025-05-13 20:25:12.813403 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813413 | orchestrator | 2025-05-13 20:25:12.813422 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-05-13 20:25:12.813432 | orchestrator | Tuesday 13 May 2025 20:24:06 +0000 (0:00:01.988) 0:03:29.328 *********** 2025-05-13 20:25:12.813441 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813451 | orchestrator | 2025-05-13 20:25:12.813461 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-05-13 20:25:12.813470 | orchestrator | Tuesday 13 May 2025 20:24:08 +0000 (0:00:02.046) 0:03:31.375 *********** 2025-05-13 20:25:12.813480 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813489 | orchestrator | 2025-05-13 20:25:12.813499 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-05-13 20:25:12.813509 | orchestrator | Tuesday 13 May 2025 20:24:10 +0000 (0:00:02.050) 
0:03:33.425 *********** 2025-05-13 20:25:12.813518 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813528 | orchestrator | 2025-05-13 20:25:12.813538 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-13 20:25:12.813547 | orchestrator | Tuesday 13 May 2025 20:24:30 +0000 (0:00:19.716) 0:03:53.141 *********** 2025-05-13 20:25:12.813557 | orchestrator | 2025-05-13 20:25:12.813567 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-13 20:25:12.813581 | orchestrator | Tuesday 13 May 2025 20:24:30 +0000 (0:00:00.069) 0:03:53.211 *********** 2025-05-13 20:25:12.813591 | orchestrator | 2025-05-13 20:25:12.813601 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-13 20:25:12.813610 | orchestrator | Tuesday 13 May 2025 20:24:30 +0000 (0:00:00.066) 0:03:53.278 *********** 2025-05-13 20:25:12.813626 | orchestrator | 2025-05-13 20:25:12.813636 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-05-13 20:25:12.813645 | orchestrator | Tuesday 13 May 2025 20:24:30 +0000 (0:00:00.065) 0:03:53.343 *********** 2025-05-13 20:25:12.813655 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813664 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.813674 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.813684 | orchestrator | 2025-05-13 20:25:12.813693 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-05-13 20:25:12.813703 | orchestrator | Tuesday 13 May 2025 20:24:42 +0000 (0:00:11.300) 0:04:04.644 *********** 2025-05-13 20:25:12.813713 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813722 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.813732 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.813741 | orchestrator | 2025-05-13 20:25:12.813751 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-05-13 20:25:12.813761 | orchestrator | Tuesday 13 May 2025 20:24:53 +0000 (0:00:11.479) 0:04:16.123 *********** 2025-05-13 20:25:12.813770 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813780 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.813790 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.813799 | orchestrator | 2025-05-13 20:25:12.813809 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-05-13 20:25:12.813818 | orchestrator | Tuesday 13 May 2025 20:24:59 +0000 (0:00:05.725) 0:04:21.849 *********** 2025-05-13 20:25:12.813828 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813837 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.813847 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.813856 | orchestrator | 2025-05-13 20:25:12.813866 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-05-13 20:25:12.813876 | orchestrator | Tuesday 13 May 2025 20:25:04 +0000 (0:00:05.642) 0:04:27.491 *********** 2025-05-13 20:25:12.813885 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:25:12.813895 | orchestrator | changed: [testbed-node-1] 2025-05-13 20:25:12.813904 | orchestrator | changed: [testbed-node-2] 2025-05-13 20:25:12.813926 | orchestrator | 2025-05-13 20:25:12.813936 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-13 20:25:12.813946 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-13 20:25:12.813956 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:25:12.813966 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-13 20:25:12.813976 | orchestrator | 2025-05-13 20:25:12.813986 | orchestrator | 2025-05-13 20:25:12.813996 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:25:12.814010 | orchestrator | Tuesday 13 May 2025 20:25:10 +0000 (0:00:05.688) 0:04:33.179 *********** 2025-05-13 20:25:12.814051 | orchestrator | =============================================================================== 2025-05-13 20:25:12.814061 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.72s 2025-05-13 20:25:12.814070 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.41s 2025-05-13 20:25:12.814080 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.17s 2025-05-13 20:25:12.814089 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.10s 2025-05-13 20:25:12.814099 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.48s 2025-05-13 20:25:12.814108 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.30s 2025-05-13 20:25:12.814118 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.95s 2025-05-13 20:25:12.814134 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s 2025-05-13 20:25:12.814143 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.12s 2025-05-13 20:25:12.814153 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.80s 2025-05-13 20:25:12.814162 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.36s 2025-05-13 20:25:12.814172 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.97s 2025-05-13 20:25:12.814181 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.74s 2025-05-13 20:25:12.814190 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.73s 2025-05-13 20:25:12.814200 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.69s 2025-05-13 20:25:12.814209 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.64s 2025-05-13 20:25:12.814219 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.55s 2025-05-13 20:25:12.814228 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.53s 2025-05-13 20:25:12.814238 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.18s 2025-05-13 20:25:12.814247 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.16s
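The recap above closes kolla-ansible's octavia play. The per-service dicts echoed in its task output are kolla-ansible's service definitions: container image, bind mounts, and a container healthcheck (healthcheck_curl against the API on port 9876, healthcheck_port for the backend daemons). These checks can be re-run by hand on a control node; a minimal sketch, assuming Docker as the container runtime and curl being available inside the octavia_api image (the healthcheck_curl helper in the log wraps curl; the node IP is taken from the log):

    # List the octavia containers together with their reported health state.
    docker ps --filter name=octavia_ --format '{{.Names}}: {{.Status}}'
    # Repeat the octavia_api healthcheck manually; -f makes curl fail on HTTP errors.
    docker exec octavia_api curl -sf http://192.168.16.10:9876 >/dev/null && echo healthy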
2025-05-13 20:25:15.856235 | orchestrator | 2025-05-13 20:25:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:18.906163 | orchestrator | 2025-05-13 20:25:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:21.956064 | orchestrator | 2025-05-13 20:25:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:25.015644 | orchestrator | 2025-05-13 20:25:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:28.060048 | orchestrator | 2025-05-13 20:25:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:31.096666 | orchestrator | 2025-05-13 20:25:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:34.141540 | orchestrator | 2025-05-13 20:25:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:37.179219 | orchestrator | 2025-05-13 20:25:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:40.220100 | orchestrator | 2025-05-13 20:25:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:43.259998 | orchestrator | 2025-05-13 20:25:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:46.315558 | orchestrator | 2025-05-13 20:25:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:49.350365 | orchestrator | 2025-05-13 20:25:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:52.399406 | orchestrator | 2025-05-13 20:25:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:55.452936 | orchestrator | 2025-05-13 20:25:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:25:58.502523 | orchestrator | 2025-05-13 20:25:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:26:01.544623 | orchestrator | 2025-05-13 20:26:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:26:04.596272 | orchestrator | 2025-05-13 20:26:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:26:07.639100 | orchestrator | 2025-05-13 20:26:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:26:10.686273 | orchestrator | 2025-05-13 20:26:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-13 20:26:13.736847 | orchestrator | 2025-05-13 20:26:14.014455 | orchestrator | 2025-05-13 20:26:14.021437 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue May 13 20:26:14 UTC 2025 2025-05-13 20:26:14.021543 | orchestrator | 2025-05-13 20:26:14.466683 | orchestrator | ok: Runtime: 0:36:05.975029 2025-05-13 20:26:14.736854 | 2025-05-13 20:26:14.737029 | TASK [Bootstrap services] 2025-05-13 20:26:15.497291 | orchestrator | 2025-05-13 20:26:15.497496 | orchestrator | # BOOTSTRAP 2025-05-13 20:26:15.497519 | orchestrator | 2025-05-13 20:26:15.497534 | orchestrator | + set -e 2025-05-13 20:26:15.497547 | orchestrator | + echo 2025-05-13 20:26:15.497560 | orchestrator | + echo '# BOOTSTRAP' 2025-05-13 20:26:15.497575 | orchestrator | + echo 2025-05-13 20:26:15.497613 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-05-13 20:26:15.505984 | orchestrator | + set -e 2025-05-13 20:26:15.506130 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-05-13 20:26:17.580801 | orchestrator | 2025-05-13 20:26:17 | INFO  | It takes a moment until task fd5118c2-3c72-408e-95df-a6874c863fa7 (flavor-manager) has been started and output is visible here.
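The flavor-manager task creates the SCS standard flavors listed below. The names encode the flavor shape: roughly the vCPU count (V marks regular vCPUs, L a low-performance core), the RAM in GiB, and optionally a root disk size in GB, with an s suffix for local SSD; SCS-2V-8 is therefore 2 vCPUs and 8 GiB RAM with no root disk. Created by hand, two of them would look roughly as follows (illustrative only; the authoritative definitions live in the flavor manager's definition files, which are not part of this log):

    # Recreate two of the flavors below manually via the OpenStack CLI.
    openstack flavor create --public --vcpus 2 --ram 8192 --disk 0 SCS-2V-8
    openstack flavor create --public --vcpus 1 --ram 4096 --disk 10 SCS-1V-4-10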
2025-05-13 20:26:21.658072 | orchestrator | 2025-05-13 20:26:21 | INFO  | Flavor SCS-1V-4 created 2025-05-13 20:26:21.842880 | orchestrator | 2025-05-13 20:26:21 | INFO  | Flavor SCS-2V-8 created 2025-05-13 20:26:22.140934 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-4V-16 created 2025-05-13 20:26:22.300604 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-8V-32 created 2025-05-13 20:26:22.443400 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-1V-2 created 2025-05-13 20:26:22.602156 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-2V-4 created 2025-05-13 20:26:22.741384 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-4V-8 created 2025-05-13 20:26:22.879334 | orchestrator | 2025-05-13 20:26:22 | INFO  | Flavor SCS-8V-16 created 2025-05-13 20:26:23.024116 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-16V-32 created 2025-05-13 20:26:23.151055 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-1V-8 created 2025-05-13 20:26:23.276635 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-2V-16 created 2025-05-13 20:26:23.419669 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-4V-32 created 2025-05-13 20:26:23.567692 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-1L-1 created 2025-05-13 20:26:23.709618 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-2V-4-20s created 2025-05-13 20:26:23.861743 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-4V-16-100s created 2025-05-13 20:26:23.999446 | orchestrator | 2025-05-13 20:26:23 | INFO  | Flavor SCS-1V-4-10 created 2025-05-13 20:26:24.141348 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-2V-8-20 created 2025-05-13 20:26:24.277560 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-4V-16-50 created 2025-05-13 20:26:24.417505 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-8V-32-100 created 2025-05-13 20:26:24.560341 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-1V-2-5 created 2025-05-13 20:26:24.704377 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-2V-4-10 created 2025-05-13 20:26:24.843761 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-4V-8-20 created 2025-05-13 20:26:25.002207 | orchestrator | 2025-05-13 20:26:24 | INFO  | Flavor SCS-8V-16-50 created 2025-05-13 20:26:25.175414 | orchestrator | 2025-05-13 20:26:25 | INFO  | Flavor SCS-16V-32-100 created 2025-05-13 20:26:25.310629 | orchestrator | 2025-05-13 20:26:25 | INFO  | Flavor SCS-1V-8-20 created 2025-05-13 20:26:25.485715 | orchestrator | 2025-05-13 20:26:25 | INFO  | Flavor SCS-2V-16-50 created 2025-05-13 20:26:25.627627 | orchestrator | 2025-05-13 20:26:25 | INFO  | Flavor SCS-4V-32-100 created 2025-05-13 20:26:25.768024 | orchestrator | 2025-05-13 20:26:25 | INFO  | Flavor SCS-1L-1-5 created 2025-05-13 20:26:27.667830 | orchestrator | 2025-05-13 20:26:27 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-05-13 20:26:27.724731 | orchestrator | 2025-05-13 20:26:27 | INFO  | Task 330162fc-75b8-44e2-932e-cc16f89f7ad9 (bootstrap-basic) was prepared for execution. 2025-05-13 20:26:27.724873 | orchestrator | 2025-05-13 20:26:27 | INFO  | It takes a moment until task 330162fc-75b8-44e2-932e-cc16f89f7ad9 (bootstrap-basic) has been started and output is visible here. 
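The bootstrap-basic play that starts here seeds the fresh cloud with defaults: LUKS and local volume types, an external public network with a subnet and a default IPv4 subnet pool, and a manager role. As plain OpenStack CLI calls, the same steps would look roughly like this (a sketch: the CIDR, pool prefix, and the subnet/pool names are placeholders, not values from the log):

    # Volume types: LUKS-encrypted and plain local storage.
    openstack volume type create --encryption-provider luks LUKS
    openstack volume type create local
    # External provider network, flagged as the default external network.
    openstack network create --external public
    openstack network set --default public
    openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet
    # Default IPv4 subnet pool used for auto-allocated project subnets.
    openstack subnet pool create --default --pool-prefix 10.0.0.0/16 default-ipv4
    # Additional role referenced by later bootstrap steps.
    openstack role create manager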
2025-05-13 20:26:33.262490 | orchestrator | 2025-05-13 20:26:33.262609 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-05-13 20:26:33.262626 | orchestrator | 2025-05-13 20:26:33.262937 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-13 20:26:33.264831 | orchestrator | Tuesday 13 May 2025 20:26:33 +0000 (0:00:01.661) 0:00:01.661 *********** 2025-05-13 20:26:36.597675 | orchestrator | ok: [localhost] 2025-05-13 20:26:36.597779 | orchestrator | 2025-05-13 20:26:36.598982 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-05-13 20:26:36.599445 | orchestrator | Tuesday 13 May 2025 20:26:36 +0000 (0:00:03.336) 0:00:04.998 *********** 2025-05-13 20:26:45.547585 | orchestrator | ok: [localhost] 2025-05-13 20:26:45.548639 | orchestrator | 2025-05-13 20:26:45.548682 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-05-13 20:26:45.548784 | orchestrator | Tuesday 13 May 2025 20:26:45 +0000 (0:00:08.951) 0:00:13.950 *********** 2025-05-13 20:26:53.695213 | orchestrator | changed: [localhost] 2025-05-13 20:26:53.695519 | orchestrator | 2025-05-13 20:26:53.696272 | orchestrator | TASK [Get volume type local] *************************************************** 2025-05-13 20:26:53.697560 | orchestrator | Tuesday 13 May 2025 20:26:53 +0000 (0:00:08.145) 0:00:22.096 *********** 2025-05-13 20:27:01.700116 | orchestrator | ok: [localhost] 2025-05-13 20:27:01.701138 | orchestrator | 2025-05-13 20:27:01.701731 | orchestrator | TASK [Create volume type local] ************************************************ 2025-05-13 20:27:01.703348 | orchestrator | Tuesday 13 May 2025 20:27:01 +0000 (0:00:08.003) 0:00:30.099 *********** 2025-05-13 20:27:09.652454 | orchestrator | changed: [localhost] 2025-05-13 20:27:09.652568 | orchestrator | 2025-05-13 20:27:09.653067 | orchestrator | TASK [Create public network] *************************************************** 2025-05-13 20:27:09.653745 | orchestrator | Tuesday 13 May 2025 20:27:09 +0000 (0:00:07.952) 0:00:38.052 *********** 2025-05-13 20:27:15.982544 | orchestrator | changed: [localhost] 2025-05-13 20:27:15.982986 | orchestrator | 2025-05-13 20:27:15.983559 | orchestrator | TASK [Set public network to default] ******************************************* 2025-05-13 20:27:15.984678 | orchestrator | Tuesday 13 May 2025 20:27:15 +0000 (0:00:06.331) 0:00:44.383 *********** 2025-05-13 20:27:23.041201 | orchestrator | changed: [localhost] 2025-05-13 20:27:23.041346 | orchestrator | 2025-05-13 20:27:23.041385 | orchestrator | TASK [Create public subnet] **************************************************** 2025-05-13 20:27:23.042116 | orchestrator | Tuesday 13 May 2025 20:27:23 +0000 (0:00:07.052) 0:00:51.436 *********** 2025-05-13 20:27:28.268170 | orchestrator | changed: [localhost] 2025-05-13 20:27:28.268252 | orchestrator | 2025-05-13 20:27:28.268561 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-05-13 20:27:28.270851 | orchestrator | Tuesday 13 May 2025 20:27:28 +0000 (0:00:05.233) 0:00:56.670 *********** 2025-05-13 20:27:33.391712 | orchestrator | changed: [localhost] 2025-05-13 20:27:33.394991 | orchestrator | 2025-05-13 20:27:33.395052 | orchestrator | TASK [Create manager role] ***************************************************** 2025-05-13 20:27:33.395061 | orchestrator | Tuesday 13 May 2025 20:27:33 
+0000 (0:00:05.121) 0:01:01.792 *********** 2025-05-13 20:27:38.380249 | orchestrator | ok: [localhost] 2025-05-13 20:27:38.380339 | orchestrator | 2025-05-13 20:27:38.381739 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:27:38.381799 | orchestrator | 2025-05-13 20:27:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 20:27:38.381810 | orchestrator | 2025-05-13 20:27:38 | INFO  | Please wait and do not abort execution. 2025-05-13 20:27:38.382771 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-13 20:27:38.384277 | orchestrator | 2025-05-13 20:27:38.385506 | orchestrator | 2025-05-13 20:27:38.386561 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:27:38.387304 | orchestrator | Tuesday 13 May 2025 20:27:38 +0000 (0:00:04.989) 0:01:06.781 *********** 2025-05-13 20:27:38.390144 | orchestrator | =============================================================================== 2025-05-13 20:27:38.391582 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.95s 2025-05-13 20:27:38.392102 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.15s 2025-05-13 20:27:38.392885 | orchestrator | Get volume type local --------------------------------------------------- 8.00s 2025-05-13 20:27:38.393463 | orchestrator | Create volume type local ------------------------------------------------ 7.95s 2025-05-13 20:27:38.394111 | orchestrator | Set public network to default ------------------------------------------- 7.05s 2025-05-13 20:27:38.394779 | orchestrator | Create public network --------------------------------------------------- 6.33s 2025-05-13 20:27:38.395439 | orchestrator | Create public subnet ---------------------------------------------------- 5.23s 2025-05-13 20:27:38.396589 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 5.12s 2025-05-13 20:27:38.397986 | orchestrator | Create manager role ----------------------------------------------------- 4.99s 2025-05-13 20:27:38.398749 | orchestrator | Gathering Facts --------------------------------------------------------- 3.34s 2025-05-13 20:27:40.804544 | orchestrator | 2025-05-13 20:27:40 | INFO  | It takes a moment until task 48827f89-6f29-4434-ba22-7a9f1e6eef1c (image-manager) has been started and output is visible here. 2025-05-13 20:27:44.513703 | orchestrator | 2025-05-13 20:27:44 | INFO  | Processing image 'Cirros 0.6.2' 2025-05-13 20:27:44.753689 | orchestrator | 2025-05-13 20:27:44 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-05-13 20:27:44.754264 | orchestrator | 2025-05-13 20:27:44 | INFO  | Importing image Cirros 0.6.2 2025-05-13 20:27:44.754986 | orchestrator | 2025-05-13 20:27:44 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-13 20:27:46.983304 | orchestrator | 2025-05-13 20:27:46 | INFO  | Waiting for image to leave queued state... 2025-05-13 20:27:49.053275 | orchestrator | 2025-05-13 20:27:49 | INFO  | Waiting for import to complete... 
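For anyone reproducing the bootstrap play above by hand, its tasks map roughly onto plain OpenStack CLI calls. The following is a minimal sketch, assuming admin credentials are already loaded: the resource names follow the task names in the log, while the encryption parameters, subnet CIDR, and pool prefixes are illustrative assumptions rather than values taken from this run.

    # Sketch of the bootstrap tasks as OpenStack CLI calls (values partly assumed).
    openstack volume type create --encryption-provider luks \
        --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
        --encryption-control-location front-end LUKS
    openstack volume type create local
    openstack network create --external public
    openstack network set --default public
    # Subnet range and pool prefixes below are placeholders, not from this log.
    openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet
    openstack subnet pool create --default --pool-prefix 10.0.0.0/8 \
        --default-prefix-length 26 default-ipv4
    openstack role create manager

Note that each "Get volume type ..." task is the idempotence check preceding the matching "Create ..." task, which is why the former report ok in the recap while the latter report changed.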
2025-05-13 20:27:59.195623 | orchestrator | 2025-05-13 20:27:59 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-05-13 20:27:59.388246 | orchestrator | 2025-05-13 20:27:59 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-05-13 20:27:59.388358 | orchestrator | 2025-05-13 20:27:59 | INFO  | Setting internal_version = 0.6.2 2025-05-13 20:27:59.388382 | orchestrator | 2025-05-13 20:27:59 | INFO  | Setting image_original_user = cirros 2025-05-13 20:27:59.389809 | orchestrator | 2025-05-13 20:27:59 | INFO  | Adding tag os:cirros 2025-05-13 20:27:59.645562 | orchestrator | 2025-05-13 20:27:59 | INFO  | Setting property architecture: x86_64 2025-05-13 20:27:59.941925 | orchestrator | 2025-05-13 20:27:59 | INFO  | Setting property hw_disk_bus: scsi 2025-05-13 20:28:00.191898 | orchestrator | 2025-05-13 20:28:00 | INFO  | Setting property hw_rng_model: virtio 2025-05-13 20:28:00.409392 | orchestrator | 2025-05-13 20:28:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-13 20:28:00.629932 | orchestrator | 2025-05-13 20:28:00 | INFO  | Setting property hw_watchdog_action: reset 2025-05-13 20:28:00.899956 | orchestrator | 2025-05-13 20:28:00 | INFO  | Setting property hypervisor_type: qemu 2025-05-13 20:28:01.104153 | orchestrator | 2025-05-13 20:28:01 | INFO  | Setting property os_distro: cirros 2025-05-13 20:28:01.300312 | orchestrator | 2025-05-13 20:28:01 | INFO  | Setting property replace_frequency: never 2025-05-13 20:28:01.527588 | orchestrator | 2025-05-13 20:28:01 | INFO  | Setting property uuid_validity: none 2025-05-13 20:28:01.735383 | orchestrator | 2025-05-13 20:28:01 | INFO  | Setting property provided_until: none 2025-05-13 20:28:01.928951 | orchestrator | 2025-05-13 20:28:01 | INFO  | Setting property image_description: Cirros 2025-05-13 20:28:02.151969 | orchestrator | 2025-05-13 20:28:02 | INFO  | Setting property image_name: Cirros 2025-05-13 20:28:02.374503 | orchestrator | 2025-05-13 20:28:02 | INFO  | Setting property internal_version: 0.6.2 2025-05-13 20:28:02.609419 | orchestrator | 2025-05-13 20:28:02 | INFO  | Setting property image_original_user: cirros 2025-05-13 20:28:02.829401 | orchestrator | 2025-05-13 20:28:02 | INFO  | Setting property os_version: 0.6.2 2025-05-13 20:28:03.050489 | orchestrator | 2025-05-13 20:28:03 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-13 20:28:03.227929 | orchestrator | 2025-05-13 20:28:03 | INFO  | Setting property image_build_date: 2023-05-30 2025-05-13 20:28:03.451113 | orchestrator | 2025-05-13 20:28:03 | INFO  | Checking status of 'Cirros 0.6.2' 2025-05-13 20:28:03.451749 | orchestrator | 2025-05-13 20:28:03 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-05-13 20:28:03.452729 | orchestrator | 2025-05-13 20:28:03 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-05-13 20:28:03.658983 | orchestrator | 2025-05-13 20:28:03 | INFO  | Processing image 'Cirros 0.6.3' 2025-05-13 20:28:03.886130 | orchestrator | 2025-05-13 20:28:03 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-05-13 20:28:03.887176 | orchestrator | 2025-05-13 20:28:03 | INFO  | Importing image Cirros 0.6.3 2025-05-13 20:28:03.889410 | orchestrator | 2025-05-13 20:28:03 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-13 20:28:05.335370 | orchestrator | 2025-05-13 
20:28:05 | INFO  | Waiting for import to complete... 2025-05-13 20:28:15.458134 | orchestrator | 2025-05-13 20:28:15 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-05-13 20:28:15.862262 | orchestrator | 2025-05-13 20:28:15 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-05-13 20:28:15.863405 | orchestrator | 2025-05-13 20:28:15 | INFO  | Setting internal_version = 0.6.3 2025-05-13 20:28:15.864487 | orchestrator | 2025-05-13 20:28:15 | INFO  | Setting image_original_user = cirros 2025-05-13 20:28:15.865635 | orchestrator | 2025-05-13 20:28:15 | INFO  | Adding tag os:cirros 2025-05-13 20:28:16.070196 | orchestrator | 2025-05-13 20:28:16 | INFO  | Setting property architecture: x86_64 2025-05-13 20:28:16.349949 | orchestrator | 2025-05-13 20:28:16 | INFO  | Setting property hw_disk_bus: scsi 2025-05-13 20:28:16.579412 | orchestrator | 2025-05-13 20:28:16 | INFO  | Setting property hw_rng_model: virtio 2025-05-13 20:28:16.781982 | orchestrator | 2025-05-13 20:28:16 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-13 20:28:17.021443 | orchestrator | 2025-05-13 20:28:17 | INFO  | Setting property hw_watchdog_action: reset 2025-05-13 20:28:17.216660 | orchestrator | 2025-05-13 20:28:17 | INFO  | Setting property hypervisor_type: qemu 2025-05-13 20:28:17.422916 | orchestrator | 2025-05-13 20:28:17 | INFO  | Setting property os_distro: cirros 2025-05-13 20:28:17.648595 | orchestrator | 2025-05-13 20:28:17 | INFO  | Setting property replace_frequency: never 2025-05-13 20:28:17.829767 | orchestrator | 2025-05-13 20:28:17 | INFO  | Setting property uuid_validity: none 2025-05-13 20:28:18.045601 | orchestrator | 2025-05-13 20:28:18 | INFO  | Setting property provided_until: none 2025-05-13 20:28:18.275576 | orchestrator | 2025-05-13 20:28:18 | INFO  | Setting property image_description: Cirros 2025-05-13 20:28:18.479578 | orchestrator | 2025-05-13 20:28:18 | INFO  | Setting property image_name: Cirros 2025-05-13 20:28:18.690405 | orchestrator | 2025-05-13 20:28:18 | INFO  | Setting property internal_version: 0.6.3 2025-05-13 20:28:18.911591 | orchestrator | 2025-05-13 20:28:18 | INFO  | Setting property image_original_user: cirros 2025-05-13 20:28:19.100858 | orchestrator | 2025-05-13 20:28:19 | INFO  | Setting property os_version: 0.6.3 2025-05-13 20:28:19.320246 | orchestrator | 2025-05-13 20:28:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-13 20:28:19.610704 | orchestrator | 2025-05-13 20:28:19 | INFO  | Setting property image_build_date: 2024-09-26 2025-05-13 20:28:19.817520 | orchestrator | 2025-05-13 20:28:19 | INFO  | Checking status of 'Cirros 0.6.3' 2025-05-13 20:28:19.819130 | orchestrator | 2025-05-13 20:28:19 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-05-13 20:28:19.820873 | orchestrator | 2025-05-13 20:28:19 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-05-13 20:28:20.788423 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-05-13 20:28:22.888312 | orchestrator | 2025-05-13 20:28:22 | INFO  | date: 2025-05-07 2025-05-13 20:28:22.888418 | orchestrator | 2025-05-13 20:28:22 | INFO  | image: octavia-amphora-haproxy-2024.2.20250507.qcow2 2025-05-13 20:28:22.888435 | orchestrator | 2025-05-13 20:28:22 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2 2025-05-13 20:28:22.888471 | orchestrator | 2025-05-13 20:28:22 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2.CHECKSUM 2025-05-13 20:28:22.926434 | orchestrator | 2025-05-13 20:28:22 | INFO  | checksum: c20b3eccc9fa67100ece69376214f12441dc8ba740779c4f796663f77ded808e 2025-05-13 20:28:23.000153 | orchestrator | 2025-05-13 20:28:22 | INFO  | It takes a moment until task c4d200db-f724-4658-8b0d-1c4f6be187a8 (image-manager) has been started and output is visible here. 2025-05-13 20:28:25.375782 | orchestrator | 2025-05-13 20:28:25 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-07' 2025-05-13 20:28:25.398364 | orchestrator | 2025-05-13 20:28:25 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2: 200 2025-05-13 20:28:25.398444 | orchestrator | 2025-05-13 20:28:25 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-07 2025-05-13 20:28:25.399584 | orchestrator | 2025-05-13 20:28:25 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2 2025-05-13 20:28:26.528908 | orchestrator | 2025-05-13 20:28:26 | INFO  | Waiting for image to leave queued state... 2025-05-13 20:28:28.569605 | orchestrator | 2025-05-13 20:28:28 | INFO  | Waiting for import to complete... 2025-05-13 20:28:38.655878 | orchestrator | 2025-05-13 20:28:38 | INFO  | Waiting for import to complete... 2025-05-13 20:28:48.738292 | orchestrator | 2025-05-13 20:28:48 | INFO  | Waiting for import to complete... 2025-05-13 20:28:59.018141 | orchestrator | 2025-05-13 20:28:59 | INFO  | Waiting for import to complete... 2025-05-13 20:29:09.346955 | orchestrator | 2025-05-13 20:29:09 | INFO  | Waiting for import to complete... 
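The image-manager follows the same pattern for every image: test the source URL, trigger a URL-based Glance import, poll until the image leaves the queued and importing states, then apply tags and properties. A rough manual equivalent with the OpenStack CLI is sketched below; the URL, checksum, and image name are taken from the log, while the download-then-upload approach itself is an illustrative assumption (the tool imports directly from the URL instead).

    # Sketch of a manual equivalent of the URL import above (approach assumed).
    URL=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2
    curl -fsSL -o amphora.qcow2 "$URL"
    # Checksum value as reported in the log above.
    echo "c20b3eccc9fa67100ece69376214f12441dc8ba740779c4f796663f77ded808e  amphora.qcow2" \
        | sha256sum -c -
    openstack image create --disk-format qcow2 --container-format bare \
        --file amphora.qcow2 --tag amphora "OpenStack Octavia Amphora 2025-05-07"
    openstack image set \
        --property hw_disk_bus=scsi --property hw_rng_model=virtio \
        --property hw_scsi_model=virtio-scsi --property hw_watchdog_action=reset \
        "OpenStack Octavia Amphora 2025-05-07"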
2025-05-13 20:29:19.482336 | orchestrator | 2025-05-13 20:29:19 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-07' successfully completed, reloading images 2025-05-13 20:29:19.856980 | orchestrator | 2025-05-13 20:29:19 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-07' 2025-05-13 20:29:19.857864 | orchestrator | 2025-05-13 20:29:19 | INFO  | Setting internal_version = 2025-05-07 2025-05-13 20:29:19.859328 | orchestrator | 2025-05-13 20:29:19 | INFO  | Setting image_original_user = ubuntu 2025-05-13 20:29:19.861736 | orchestrator | 2025-05-13 20:29:19 | INFO  | Adding tag amphora 2025-05-13 20:29:20.072470 | orchestrator | 2025-05-13 20:29:20 | INFO  | Adding tag os:ubuntu 2025-05-13 20:29:20.361023 | orchestrator | 2025-05-13 20:29:20 | INFO  | Setting property architecture: x86_64 2025-05-13 20:29:20.606255 | orchestrator | 2025-05-13 20:29:20 | INFO  | Setting property hw_disk_bus: scsi 2025-05-13 20:29:20.790935 | orchestrator | 2025-05-13 20:29:20 | INFO  | Setting property hw_rng_model: virtio 2025-05-13 20:29:21.026392 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-13 20:29:21.234571 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property hw_watchdog_action: reset 2025-05-13 20:29:21.403469 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property hypervisor_type: qemu 2025-05-13 20:29:21.598563 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property os_distro: ubuntu 2025-05-13 20:29:21.813039 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property replace_frequency: quarterly 2025-05-13 20:29:21.985145 | orchestrator | 2025-05-13 20:29:21 | INFO  | Setting property uuid_validity: last-1 2025-05-13 20:29:22.161014 | orchestrator | 2025-05-13 20:29:22 | INFO  | Setting property provided_until: none 2025-05-13 20:29:22.337968 | orchestrator | 2025-05-13 20:29:22 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-05-13 20:29:22.569013 | orchestrator | 2025-05-13 20:29:22 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-05-13 20:29:22.761533 | orchestrator | 2025-05-13 20:29:22 | INFO  | Setting property internal_version: 2025-05-07 2025-05-13 20:29:22.975314 | orchestrator | 2025-05-13 20:29:22 | INFO  | Setting property image_original_user: ubuntu 2025-05-13 20:29:23.202514 | orchestrator | 2025-05-13 20:29:23 | INFO  | Setting property os_version: 2025-05-07 2025-05-13 20:29:23.405005 | orchestrator | 2025-05-13 20:29:23 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250507.qcow2 2025-05-13 20:29:23.641386 | orchestrator | 2025-05-13 20:29:23 | INFO  | Setting property image_build_date: 2025-05-07 2025-05-13 20:29:23.835198 | orchestrator | 2025-05-13 20:29:23 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-07' 2025-05-13 20:29:23.835997 | orchestrator | 2025-05-13 20:29:23 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-07' 2025-05-13 20:29:24.016592 | orchestrator | 2025-05-13 20:29:24 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-05-13 20:29:24.016715 | orchestrator | 2025-05-13 20:29:24 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-05-13 20:29:24.016798 | orchestrator | 2025-05-13 20:29:24 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-05-13 20:29:24.017556 | 
orchestrator | 2025-05-13 20:29:24 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-05-13 20:29:24.464896 | orchestrator | ok: Runtime: 0:03:09.247005 2025-05-13 20:29:24.494095 | 2025-05-13 20:29:24.494275 | TASK [Run checks] 2025-05-13 20:29:25.204711 | orchestrator | + set -e 2025-05-13 20:29:25.204950 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-13 20:29:25.205022 | orchestrator | ++ export INTERACTIVE=false 2025-05-13 20:29:25.205040 | orchestrator | ++ INTERACTIVE=false 2025-05-13 20:29:25.205051 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-13 20:29:25.205060 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-13 20:29:25.205071 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-13 20:29:25.205489 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-13 20:29:25.209165 | orchestrator | 2025-05-13 20:29:25.209227 | orchestrator | # CHECK 2025-05-13 20:29:25.209236 | orchestrator | 2025-05-13 20:29:25.209243 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 20:29:25.209254 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 20:29:25.209261 | orchestrator | + echo 2025-05-13 20:29:25.209267 | orchestrator | + echo '# CHECK' 2025-05-13 20:29:25.209274 | orchestrator | + echo 2025-05-13 20:29:25.209285 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-13 20:29:25.209447 | orchestrator | ++ semver latest 5.0.0 2025-05-13 20:29:25.257169 | orchestrator | 2025-05-13 20:29:25.257267 | orchestrator | ## Containers @ testbed-manager 2025-05-13 20:29:25.257278 | orchestrator | 2025-05-13 20:29:25.257287 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-13 20:29:25.257295 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 20:29:25.257303 | orchestrator | + echo 2025-05-13 20:29:25.257311 | orchestrator | + echo '## Containers @ testbed-manager' 2025-05-13 20:29:25.257378 | orchestrator | + echo 2025-05-13 20:29:25.257387 | orchestrator | + osism container testbed-manager ps 2025-05-13 20:29:27.171463 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-13 20:29:27.171583 | orchestrator | fcd3b4646a7d registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2025-05-13 20:29:27.171603 | orchestrator | 2672b110fc31 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2025-05-13 20:29:27.171612 | orchestrator | 384022ee0646 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-13 20:29:27.171627 | orchestrator | 4da174f96733 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-05-13 20:29:27.171636 | orchestrator | 7214fcbee3f5 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-05-13 20:29:27.171650 | orchestrator | 8743aab22bf9 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-05-13 20:29:27.171659 | orchestrator | 4ca159a70a1e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-05-13 20:29:27.171850 | orchestrator | 
abbfec56252e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-05-13 20:29:27.171866 | orchestrator | 3d13abda6dfd phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-05-13 20:29:27.171898 | orchestrator | 491e3bdf9800 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-05-13 20:29:27.171907 | orchestrator | 188c41a29eca registry.osism.tech/osism/homer:v25.05.1 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-05-13 20:29:27.171915 | orchestrator | 49157ed7984f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-05-13 20:29:27.171923 | orchestrator | e2047b906db3 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 39 minutes ago Up 39 minutes (healthy) osism-ansible 2025-05-13 20:29:27.171931 | orchestrator | 5f626a3a5351 ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-05-13 20:29:27.171944 | orchestrator | 2c59bdcc3b9e registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 59 minutes ago Up 58 minutes (healthy) manager-inventory_reconciler-1 2025-05-13 20:29:27.171953 | orchestrator | fc16e8bf9858 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 59 minutes ago Up 59 minutes (healthy) osism-kubernetes 2025-05-13 20:29:27.171961 | orchestrator | 8e1ac6a57349 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 59 minutes ago Up 59 minutes (healthy) ceph-ansible 2025-05-13 20:29:27.171969 | orchestrator | 7513f5831a73 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 59 minutes ago Up 59 minutes (healthy) kolla-ansible 2025-05-13 20:29:27.171978 | orchestrator | 201e6071fee6 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 59 minutes ago Up 59 minutes (healthy) 8000/tcp manager-ara-server-1 2025-05-13 20:29:27.171986 | orchestrator | 4bd873d77c46 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-beat-1 2025-05-13 20:29:27.171994 | orchestrator | d3efa0f00457 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-conductor-1 2025-05-13 20:29:27.172002 | orchestrator | 6c2788bafe40 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-netbox-1 2025-05-13 20:29:27.172010 | orchestrator | 8a7539e4144c registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 3306/tcp manager-mariadb-1 2025-05-13 20:29:27.172078 | orchestrator | 7f9f477b576c registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 6379/tcp manager-redis-1 2025-05-13 20:29:27.172098 | orchestrator | 184ef6b6aa73 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-flower-1 2025-05-13 20:29:27.172107 | orchestrator | 0ee480940b02 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-listener-1 2025-05-13 20:29:27.172115 | orchestrator | ed3ca4f82a38 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) 
192.168.16.5:8000->8000/tcp manager-api-1 2025-05-13 20:29:27.172123 | orchestrator | fe2b5e6b3506 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 59 minutes ago Up 59 minutes (healthy) osismclient 2025-05-13 20:29:27.172135 | orchestrator | 78e632de387c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-openstack-1 2025-05-13 20:29:27.172144 | orchestrator | 5b267e67306e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 59 minutes (healthy) manager-watchdog-1 2025-05-13 20:29:27.172152 | orchestrator | 5e32f2414028 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" About an hour ago Up About an hour (healthy) netbox-netbox-worker-1 2025-05-13 20:29:27.172161 | orchestrator | 017ed0dfaa99 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" About an hour ago Up About an hour (healthy) netbox-netbox-1 2025-05-13 20:29:27.172169 | orchestrator | 2a855d241d93 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 6379/tcp netbox-redis-1 2025-05-13 20:29:27.172177 | orchestrator | 51074d91385b registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" About an hour ago Up About an hour (healthy) 5432/tcp netbox-postgres-1 2025-05-13 20:29:27.172185 | orchestrator | d324e78374b0 registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-05-13 20:29:27.438637 | orchestrator | 2025-05-13 20:29:27.438755 | orchestrator | ## Images @ testbed-manager 2025-05-13 20:29:27.438770 | orchestrator | 2025-05-13 20:29:27.438782 | orchestrator | + echo 2025-05-13 20:29:27.438794 | orchestrator | + echo '## Images @ testbed-manager' 2025-05-13 20:29:27.438807 | orchestrator | + echo 2025-05-13 20:29:27.438845 | orchestrator | + osism container testbed-manager images 2025-05-13 20:29:29.528334 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-13 20:29:29.528436 | orchestrator | registry.osism.tech/osism/osism-ansible latest 64090569d5ca 43 minutes ago 555MB 2025-05-13 20:29:29.528446 | orchestrator | registry.osism.tech/osism/osism-ansible 5505c26edefe 4 hours ago 555MB 2025-05-13 20:29:29.528478 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 5c7511ea3d96 5 hours ago 536MB 2025-05-13 20:29:29.528486 | orchestrator | registry.osism.tech/osism/osism latest 28d53e9b74ae 5 hours ago 339MB 2025-05-13 20:29:29.528494 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 2da5f45db2a6 5 hours ago 311MB 2025-05-13 20:29:29.528501 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 9a70cdf28c76 6 hours ago 1.2GB 2025-05-13 20:29:29.528511 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 76112e377453 11 hours ago 572MB 2025-05-13 20:29:29.528518 | orchestrator | registry.osism.tech/osism/homer v25.05.1 6846e50da1be 17 hours ago 11MB 2025-05-13 20:29:29.528526 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f22219aa982a 17 hours ago 225MB 2025-05-13 20:29:29.528533 | orchestrator | registry.osism.tech/osism/cephclient reef c21acc38590e 17 hours ago 453MB 2025-05-13 20:29:29.528540 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 5 days ago 275MB 2025-05-13 20:29:29.528548 | orchestrator | registry.osism.tech/kolla/cron 2024.2 
1889be0eac08 6 days ago 318MB 2025-05-13 20:29:29.528555 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-13 20:29:29.528562 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-13 20:29:29.528570 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 3b8b9ff5984d 6 days ago 360MB 2025-05-13 20:29:29.528577 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 bf00029ac6b4 6 days ago 456MB 2025-05-13 20:29:29.528584 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-13 20:29:29.528591 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-13 20:29:29.528599 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 cb92564b44ae 6 days ago 891MB 2025-05-13 20:29:29.528618 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 8 days ago 224MB 2025-05-13 20:29:29.528627 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 2 weeks ago 504MB 2025-05-13 20:29:29.528635 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 2 weeks ago 41.4MB 2025-05-13 20:29:29.528642 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 6 weeks ago 817MB 2025-05-13 20:29:29.528650 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 2 months ago 328MB 2025-05-13 20:29:29.528657 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 3 months ago 571MB 2025-05-13 20:29:29.528665 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB 2025-05-13 20:29:29.528672 | orchestrator | ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-05-13 20:29:29.832180 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-13 20:29:29.833809 | orchestrator | ++ semver latest 5.0.0 2025-05-13 20:29:29.891176 | orchestrator | 2025-05-13 20:29:29.891289 | orchestrator | ## Containers @ testbed-node-0 2025-05-13 20:29:29.891305 | orchestrator | 2025-05-13 20:29:29.891317 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-13 20:29:29.891328 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 20:29:29.891340 | orchestrator | + echo 2025-05-13 20:29:29.891352 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-05-13 20:29:29.891391 | orchestrator | + echo 2025-05-13 20:29:29.891404 | orchestrator | + osism container testbed-node-0 ps 2025-05-13 20:29:32.075491 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-13 20:29:32.075598 | orchestrator | e64780f21cab registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-13 20:29:32.075613 | orchestrator | 45e51b0d0bf0 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-13 20:29:32.075619 | orchestrator | dcbb41948b1c registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-13 20:29:32.075625 | orchestrator | 0c416821fdea registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-13 20:29:32.075632 | orchestrator | 91d0c8742367 
registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-13 20:29:32.075638 | orchestrator | de4507896036 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-13 20:29:32.075645 | orchestrator | c2608dfa01ee registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-13 20:29:32.075651 | orchestrator | ce46c5a77535 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-05-13 20:29:32.075657 | orchestrator | 064f74fbdb7d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-13 20:29:32.075675 | orchestrator | 5c62dd75ebbd registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-13 20:29:32.075681 | orchestrator | 05164faaa7cb registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-13 20:29:32.075702 | orchestrator | 45462bc55c93 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-13 20:29:32.075709 | orchestrator | 0aa92577ff30 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-13 20:29:32.075714 | orchestrator | bfdc9010e6af registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-13 20:29:32.075721 | orchestrator | e9710e5248f4 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-13 20:29:32.075727 | orchestrator | 7798b401971e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-05-13 20:29:32.075732 | orchestrator | 56fbf00cac09 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-13 20:29:32.075743 | orchestrator | 73a265a8e1dc registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-13 20:29:32.075750 | orchestrator | 5432b676367a registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-13 20:29:32.075774 | orchestrator | a38d52e3cb90 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-13 20:29:32.075781 | orchestrator | 2a0b3a66b337 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-13 20:29:32.075787 | orchestrator | 5def691a4c5d registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-13 20:29:32.075793 | orchestrator | a77fd30c995c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-13 20:29:32.075799 | orchestrator | 499a408613db registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) 
glance_api 2025-05-13 20:29:32.075805 | orchestrator | 293beed221dd registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-13 20:29:32.075811 | orchestrator | 923b694b389a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-13 20:29:32.075861 | orchestrator | 4dbf324f4b52 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-05-13 20:29:32.075866 | orchestrator | 52c2f2f3388e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-13 20:29:32.075870 | orchestrator | 9950949d075e registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-05-13 20:29:32.075874 | orchestrator | 966148e0dbd1 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-05-13 20:29:32.075877 | orchestrator | 198f9d0505f1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-05-13 20:29:32.076082 | orchestrator | 222e24e1ea33 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-05-13 20:29:32.076088 | orchestrator | abdcc713491c registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) keystone 2025-05-13 20:29:32.076092 | orchestrator | 9583ad4e0dab registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-13 20:29:32.076096 | orchestrator | 2f25ec29c533 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-05-13 20:29:32.076105 | orchestrator | 8d2099d3a78a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-05-13 20:29:32.076109 | orchestrator | 02083e97d28c registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-05-13 20:29:32.076113 | orchestrator | 96c19f71e26d registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-05-13 20:29:32.076123 | orchestrator | 59b98b262490 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-05-13 20:29:32.076127 | orchestrator | 62617caddf61 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-05-13 20:29:32.076131 | orchestrator | 18433ab29c06 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-05-13 20:29:32.076134 | orchestrator | 01b0bb1834d9 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-05-13 20:29:32.076138 | orchestrator | 6789c52d7d60 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-05-13 20:29:32.076142 | orchestrator | c73fa9abf2a5 registry.osism.tech/kolla/ovn-northd:2024.2 
"dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-05-13 20:29:32.076146 | orchestrator | b1b1e22be91d registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-05-13 20:29:32.076150 | orchestrator | 00e25b5c90e1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-05-13 20:29:32.076153 | orchestrator | bc50caa5eb97 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_controller 2025-05-13 20:29:32.076157 | orchestrator | 5d274135c3ee registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-05-13 20:29:32.076161 | orchestrator | a9117e32dc94 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-05-13 20:29:32.076165 | orchestrator | f8f7f6856b89 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-05-13 20:29:32.076169 | orchestrator | dd751850312c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-05-13 20:29:32.076173 | orchestrator | 154e05dd2d26 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-05-13 20:29:32.076177 | orchestrator | 104947bce3f7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-05-13 20:29:32.076180 | orchestrator | c133e1dadde5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-05-13 20:29:32.076184 | orchestrator | 889ac35cc7d7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-05-13 20:29:32.076190 | orchestrator | 9098f84547fb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 30 minutes kolla_toolbox 2025-05-13 20:29:32.076194 | orchestrator | 92ac1d4eea27 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-05-13 20:29:32.358451 | orchestrator | 2025-05-13 20:29:32.358526 | orchestrator | ## Images @ testbed-node-0 2025-05-13 20:29:32.358532 | orchestrator | 2025-05-13 20:29:32.358555 | orchestrator | + echo 2025-05-13 20:29:32.358560 | orchestrator | + echo '## Images @ testbed-node-0' 2025-05-13 20:29:32.358566 | orchestrator | + echo 2025-05-13 20:29:32.358571 | orchestrator | + osism container testbed-node-0 images 2025-05-13 20:29:34.521999 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-13 20:29:34.522132 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 17 hours ago 1.27GB 2025-05-13 20:29:34.522140 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB 2025-05-13 20:29:34.522145 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB 2025-05-13 20:29:34.522150 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB 2025-05-13 20:29:34.522154 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB 2025-05-13 20:29:34.522158 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB 2025-05-13 20:29:34.522162 | 
orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB 2025-05-13 20:29:34.522167 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB 2025-05-13 20:29:34.522171 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB 2025-05-13 20:29:34.522175 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB 2025-05-13 20:29:34.522179 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB 2025-05-13 20:29:34.522183 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB 2025-05-13 20:29:34.522187 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB 2025-05-13 20:29:34.522191 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB 2025-05-13 20:29:34.522203 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB 2025-05-13 20:29:34.522207 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB 2025-05-13 20:29:34.522211 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB 2025-05-13 20:29:34.522216 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB 2025-05-13 20:29:34.522220 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB 2025-05-13 20:29:34.522224 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB 2025-05-13 20:29:34.522228 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB 2025-05-13 20:29:34.522232 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB 2025-05-13 20:29:34.522237 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 67d8a8d94f28 6 days ago 1.04GB 2025-05-13 20:29:34.522241 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 954d23827c32 6 days ago 1.04GB 2025-05-13 20:29:34.522245 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 12ca4ba36866 6 days ago 1.04GB 2025-05-13 20:29:34.522249 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 d5dd5b6fe0a1 6 days ago 1.04GB 2025-05-13 20:29:34.522253 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 51ef1cabd60d 6 days ago 1.04GB 2025-05-13 20:29:34.522257 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 50b1dc1a5592 6 days ago 1.04GB 2025-05-13 20:29:34.522276 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB 2025-05-13 20:29:34.522281 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB 2025-05-13 20:29:34.522285 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB 2025-05-13 20:29:34.522289 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB 2025-05-13 20:29:34.522293 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB 2025-05-13 20:29:34.522297 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB 2025-05-13 20:29:34.522301 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 
1.06GB 2025-05-13 20:29:34.522305 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB 2025-05-13 20:29:34.522320 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 1.41GB 2025-05-13 20:29:34.522324 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB 2025-05-13 20:29:34.522329 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB 2025-05-13 20:29:34.522333 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB 2025-05-13 20:29:34.522337 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB 2025-05-13 20:29:34.522348 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB 2025-05-13 20:29:34.522352 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB 2025-05-13 20:29:34.522356 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB 2025-05-13 20:29:34.522370 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB 2025-05-13 20:29:34.522375 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB 2025-05-13 20:29:34.522379 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB 2025-05-13 20:29:34.522383 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB 2025-05-13 20:29:34.522387 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 41f5975572eb 6 days ago 1.11GB 2025-05-13 20:29:34.522391 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ac5f63def63f 6 days ago 1.11GB 2025-05-13 20:29:34.522395 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB 2025-05-13 20:29:34.522399 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB 2025-05-13 20:29:34.522403 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB 2025-05-13 20:29:34.522407 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB 2025-05-13 20:29:34.522412 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB 2025-05-13 20:29:34.522416 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB 2025-05-13 20:29:34.522420 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB 2025-05-13 20:29:34.522427 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB 2025-05-13 20:29:34.522432 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB 2025-05-13 20:29:34.522436 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB 2025-05-13 20:29:34.522440 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ca1be25de8b6 6 days ago 946MB 2025-05-13 20:29:34.522444 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB 2025-05-13 20:29:34.522448 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB 2025-05-13 20:29:34.809802 | orchestrator | + for node in testbed-manager testbed-node-0 
testbed-node-1 testbed-node-2 2025-05-13 20:29:34.810410 | orchestrator | ++ semver latest 5.0.0 2025-05-13 20:29:34.853596 | orchestrator | 2025-05-13 20:29:34.853715 | orchestrator | ## Containers @ testbed-node-1 2025-05-13 20:29:34.853737 | orchestrator | 2025-05-13 20:29:34.853753 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-13 20:29:34.853770 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-13 20:29:34.853787 | orchestrator | + echo 2025-05-13 20:29:34.853803 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-05-13 20:29:34.853891 | orchestrator | + echo 2025-05-13 20:29:34.853908 | orchestrator | + osism container testbed-node-1 ps 2025-05-13 20:29:37.067962 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-13 20:29:37.068062 | orchestrator | c6ea5271b952 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-13 20:29:37.068075 | orchestrator | 3ff54dda25e9 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-13 20:29:37.068085 | orchestrator | 77473a470ac0 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-05-13 20:29:37.068094 | orchestrator | ed45b1964a85 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-05-13 20:29:37.068107 | orchestrator | c358f686faa6 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-05-13 20:29:37.068116 | orchestrator | 2f4d8dbe6544 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-13 20:29:37.068135 | orchestrator | bdd87369f435 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-13 20:29:37.068144 | orchestrator | 156f982a5304 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-05-13 20:29:37.068153 | orchestrator | e99fabcd599f registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-13 20:29:37.068163 | orchestrator | 41e0617c2abc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-05-13 20:29:37.068171 | orchestrator | 41ccc88b9101 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-13 20:29:37.068180 | orchestrator | 29cb23458a86 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-05-13 20:29:37.068210 | orchestrator | c20db147e8d7 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-13 20:29:37.068220 | orchestrator | 6ffa1e2f682e registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-05-13 20:29:37.068229 | orchestrator | 451a27492488 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-05-13 20:29:37.068237 | orchestrator | abf1b7f87770 
registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_api 2025-05-13 20:29:37.068246 | orchestrator | a27b212bcc7e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-13 20:29:37.068254 | orchestrator | 20d6dc441773 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-13 20:29:37.068263 | orchestrator | 68e2b73fcb64 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-13 20:29:37.068271 | orchestrator | cb13e86df5ef registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-13 20:29:37.068280 | orchestrator | ea2af85b185f registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-05-13 20:29:37.068302 | orchestrator | ffd3081c9083 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-05-13 20:29:37.068312 | orchestrator | 53e098c730d7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-13 20:29:37.068321 | orchestrator | 49da00e16e60 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-05-13 20:29:37.068329 | orchestrator | de7a2f1e757f registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-05-13 20:29:37.068338 | orchestrator | e3a4fe6f547b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-05-13 20:29:37.068347 | orchestrator | 7ac67c76aa96 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-05-13 20:29:37.068356 | orchestrator | c51011d5b318 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-13 20:29:37.068365 | orchestrator | 55ed9b1c8a17 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-05-13 20:29:37.068378 | orchestrator | 6b16a63afc70 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-05-13 20:29:37.068387 | orchestrator | 4f5ac964fd5a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-05-13 20:29:37.068395 | orchestrator | 9caf794fb3fe registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-05-13 20:29:37.068411 | orchestrator | 245a70caf7fc registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-13 20:29:37.068422 | orchestrator | 902778d6ae46 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-05-13 20:29:37.068432 | orchestrator | 142ba86b3b4c 
registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-05-13 20:29:37.068442 | orchestrator | a346d0d18cb0 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-05-13 20:29:37.068452 | orchestrator | 047f32088be6 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-05-13 20:29:37.068463 | orchestrator | e5d87d43babb registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-05-13 20:29:37.068473 | orchestrator | 2b12bd11af85 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) opensearch
2025-05-13 20:29:37.068482 | orchestrator | ba587aad1a81 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-05-13 20:29:37.068492 | orchestrator | ce91cf1c4c62 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-05-13 20:29:37.068502 | orchestrator | c7a89d75b077 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-05-13 20:29:37.068512 | orchestrator | 7c0ba66d82d3 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-05-13 20:29:37.068522 | orchestrator | 4f132c4cf20d registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-05-13 20:29:37.068539 | orchestrator | dc557bad29a1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-05-13 20:29:37.068549 | orchestrator | 35957c5518c9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-05-13 20:29:37.068559 | orchestrator | 6aba103037af registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-05-13 20:29:37.068569 | orchestrator | e6f1703fffb7 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-05-13 20:29:37.068582 | orchestrator | b11b6c582e68 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-05-13 20:29:37.068598 | orchestrator | 187e1c5ac646 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-05-13 20:29:37.068615 | orchestrator | 6c92bbc4366e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-05-13 20:29:37.068650 | orchestrator | 06021844129a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-05-13 20:29:37.068665 | orchestrator | 6201dcefdcd7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-05-13 20:29:37.068679 | orchestrator | 8606028afcfa registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-05-13 20:29:37.068694 | orchestrator | 5f6b0d56cfa2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-05-13 20:29:37.068715 | orchestrator | bbd07c6aafcf registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-05-13 20:29:37.068729 | orchestrator | c08220e70269 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-05-13 20:29:37.383625 | orchestrator |
2025-05-13 20:29:37.383758 | orchestrator | ## Images @ testbed-node-1
2025-05-13 20:29:37.383783 | orchestrator |
2025-05-13 20:29:37.383803 | orchestrator | + echo
2025-05-13 20:29:37.383873 | orchestrator | + echo '## Images @ testbed-node-1'
2025-05-13 20:29:37.383895 | orchestrator | + echo
2025-05-13 20:29:37.383912 | orchestrator | + osism container testbed-node-1 images
2025-05-13 20:29:39.569856 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-13 20:29:39.569982 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 17 hours ago 1.27GB
2025-05-13 20:29:39.569997 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB
2025-05-13 20:29:39.570009 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB
2025-05-13 20:29:39.570096 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB
2025-05-13 20:29:39.570109 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB
2025-05-13 20:29:39.570120 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB
2025-05-13 20:29:39.570131 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB
2025-05-13 20:29:39.570142 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB
2025-05-13 20:29:39.570153 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB
2025-05-13 20:29:39.570164 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB
2025-05-13 20:29:39.570175 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB
2025-05-13 20:29:39.570186 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB
2025-05-13 20:29:39.570197 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB
2025-05-13 20:29:39.570208 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB
2025-05-13 20:29:39.570219 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB
2025-05-13 20:29:39.570230 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB
2025-05-13 20:29:39.570241 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB
2025-05-13 20:29:39.570252 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB
2025-05-13 20:29:39.570286 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB
2025-05-13 20:29:39.570297 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB
2025-05-13 20:29:39.570308 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB
2025-05-13 20:29:39.570318 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB
2025-05-13 20:29:39.570330 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB
2025-05-13 20:29:39.570340 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB
2025-05-13 20:29:39.570351 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB
2025-05-13 20:29:39.570362 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB
2025-05-13 20:29:39.570373 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB
2025-05-13 20:29:39.570384 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB
2025-05-13 20:29:39.570396 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 1.06GB
2025-05-13 20:29:39.570407 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB
2025-05-13 20:29:39.570418 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 1.41GB
2025-05-13 20:29:39.570429 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB
2025-05-13 20:29:39.570440 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB
2025-05-13 20:29:39.570451 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB
2025-05-13 20:29:39.570461 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB
2025-05-13 20:29:39.570472 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB
2025-05-13 20:29:39.570501 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB
2025-05-13 20:29:39.570513 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB
2025-05-13 20:29:39.570524 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB
2025-05-13 20:29:39.570535 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB
2025-05-13 20:29:39.570546 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB
2025-05-13 20:29:39.570557 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB
2025-05-13 20:29:39.570568 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB
2025-05-13 20:29:39.570587 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB
2025-05-13 20:29:39.570598 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB
2025-05-13 20:29:39.570609 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB
2025-05-13 20:29:39.570619 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB
2025-05-13 20:29:39.570630 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB
2025-05-13 20:29:39.570657 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB
2025-05-13 20:29:39.570668 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB
2025-05-13 20:29:39.570680 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB
2025-05-13 20:29:39.570690 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB
2025-05-13 20:29:39.570701 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ca1be25de8b6 6 days ago 946MB
2025-05-13 20:29:39.570712 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB
2025-05-13 20:29:39.570723 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB
2025-05-13 20:29:39.850731 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-13 20:29:39.851462 | orchestrator | ++ semver latest 5.0.0
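[Editor's note: the `+ for node in …` trace above is the summary step iterating over the testbed hosts. A minimal sketch of the loop it implies, as a hedged reconstruction — the node names come from the trace, but the loop body is an assumption, not the actual script from /opt/configuration:]

    # Hypothetical reconstruction of the per-node inspection loop seen in the trace:
    # print container and image inventories for each testbed host via the manager.
    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        echo; echo "## Containers @ ${node}"; echo
        osism container "${node}" ps
        echo; echo "## Images @ ${node}"; echo
        osism container "${node}" images
    done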
2025-05-13 20:29:39.916229 | orchestrator |
2025-05-13 20:29:39.916335 | orchestrator | ## Containers @ testbed-node-2
2025-05-13 20:29:39.916351 | orchestrator |
2025-05-13 20:29:39.916363 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-13 20:29:39.916374 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-13 20:29:39.916385 | orchestrator | + echo
2025-05-13 20:29:39.916397 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-05-13 20:29:39.916409 | orchestrator | + echo
2025-05-13 20:29:39.916420 | orchestrator | + osism container testbed-node-2 ps
2025-05-13 20:29:41.949662 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-13 20:29:41.949775 | orchestrator | c442d56917ee registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-05-13 20:29:41.949792 | orchestrator | 28ef1271f6a7 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-05-13 20:29:41.949804 | orchestrator | f47dce7bc76f registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-05-13 20:29:41.949845 | orchestrator | b3159c68e4a5 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-05-13 20:29:41.949859 | orchestrator | 0809f785e7ca registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api
2025-05-13 20:29:41.949870 | orchestrator | cf7f09bf755b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-05-13 20:29:41.949881 | orchestrator | 2cc6f86a99ab registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-05-13 20:29:41.949891 | orchestrator | 388533ccb7dc registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-05-13 20:29:41.949902 | orchestrator | d9437343a087 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-05-13 20:29:41.949913 | orchestrator | 2014968eec24 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-05-13 20:29:41.949923 | orchestrator | ff5451e7e0f7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-05-13 20:29:41.949959 | orchestrator | 3783a668c7b1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-05-13 20:29:41.949971 | orchestrator | 83793fd17941 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-05-13 20:29:41.949982 | orchestrator | d3f99883e9b4 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-05-13 20:29:41.949993 | orchestrator | 77f67bcd3c84 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-05-13 20:29:41.950004 | orchestrator | dfcac80bbf90 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-05-13 20:29:41.950066 | orchestrator | 4ef10ab49e7f registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-05-13 20:29:41.950081 | orchestrator | a38b5289b745 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-05-13 20:29:41.950092 | orchestrator | 9dcd9f94a0eb registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-05-13 20:29:41.950103 | orchestrator | 458807017d81 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-05-13 20:29:41.950114 | orchestrator | 78b96f8dcaa6 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-05-13 20:29:41.950144 | orchestrator | d670a1909fb5 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-05-13 20:29:41.950156 | orchestrator | 80f5985b6128 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-05-13 20:29:41.950183 | orchestrator | be1b5d95b876 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-05-13 20:29:41.950194 | orchestrator | 8408066d3c74 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-05-13 20:29:41.950205 | orchestrator | bc0652933854 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-05-13 20:29:41.950216 | orchestrator | 97edfc81ae1f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-05-13 20:29:41.950227 | orchestrator | a848c3ee9d4f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-05-13 20:29:41.950238 | orchestrator | 61034268fb16 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-05-13 20:29:41.950249 | orchestrator | 9c65a1b9278c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-05-13 20:29:41.950260 | orchestrator | 37f6a8fcd702 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-05-13 20:29:41.950280 | orchestrator | db04e9cb8349 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2025-05-13 20:29:41.950291 | orchestrator | c0345058d654 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-05-13 20:29:41.950302 | orchestrator | cbd2b12e036d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-05-13 20:29:41.950317 | orchestrator | 817e3d84ac68 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-05-13 20:29:41.950328 | orchestrator | d8a3d2a18a40 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-05-13 20:29:41.950338 | orchestrator | 09d2b01c69bb registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-05-13 20:29:41.950349 | orchestrator | 1372a0b74d2c registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-05-13 20:29:41.950360 | orchestrator | 7b468d82eb79 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-05-13 20:29:41.950370 | orchestrator | 8ef1d27fa8ad registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2025-05-13 20:29:41.950381 | orchestrator | a4344c3bdd72 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-05-13 20:29:41.950391 | orchestrator | dddb88a2b3e4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes (healthy) proxysql
2025-05-13 20:29:41.950402 | orchestrator | 4bf437294815 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-05-13 20:29:41.950419 | orchestrator | 5cf1404012d9 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-05-13 20:29:41.950448 | orchestrator | e6bdce427087 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-05-13 20:29:41.950467 | orchestrator | 56ccb08bdb5a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-05-13 20:29:41.950487 | orchestrator | cb665e49cc96 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-05-13 20:29:41.950506 | orchestrator | c012d805e565 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-05-13 20:29:41.950524 | orchestrator | d4982db07972 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-05-13 20:29:41.950544 | orchestrator | 90d4fc65d5d6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-05-13 20:29:41.950577 | orchestrator | e624403352f8 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-05-13 20:29:41.950593 | orchestrator | 002c42ad8db6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-05-13 20:29:41.950603 | orchestrator | e1af0b37caa2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-05-13 20:29:41.950614 | orchestrator | fdc4e3e50685 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-05-13 20:29:41.950624 | orchestrator | 6eaf851a57e8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-05-13 20:29:41.950635 | orchestrator | a26f470e18ad registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-05-13 20:29:41.950646 | orchestrator | f7cc7d5745c3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
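[Editor's note: every health-checked container above reports (healthy). For a quicker pass/fail view than scanning the full listing, the standard Docker CLI can filter on health state directly; a minimal sketch, run on the node itself:]

    # List only containers whose health check is currently failing;
    # empty output means every health-checked container is healthy.
    docker ps --filter "health=unhealthy" --format "{{.Names}}: {{.Status}}"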
2025-05-13 20:29:42.134461 | orchestrator |
2025-05-13 20:29:42.134582 | orchestrator | ## Images @ testbed-node-2
2025-05-13 20:29:42.134615 | orchestrator |
2025-05-13 20:29:42.134634 | orchestrator | + echo
2025-05-13 20:29:42.134652 | orchestrator | + echo '## Images @ testbed-node-2'
2025-05-13 20:29:42.134671 | orchestrator | + echo
2025-05-13 20:29:42.134690 | orchestrator | + osism container testbed-node-2 images
2025-05-13 20:29:44.145079 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-13 20:29:44.145170 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a6eecfeabe79 17 hours ago 1.27GB
2025-05-13 20:29:44.145180 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 04fc7376c64c 6 days ago 375MB
2025-05-13 20:29:44.145187 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 67fa0a55bc5e 6 days ago 1.59GB
2025-05-13 20:29:44.145211 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 f2651c58df80 6 days ago 1.55GB
2025-05-13 20:29:44.145218 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4cdd10b90f5a 6 days ago 1.01GB
2025-05-13 20:29:44.145225 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 50d58f1f6e4e 6 days ago 326MB
2025-05-13 20:29:44.145232 | orchestrator | registry.osism.tech/kolla/cron 2024.2 1889be0eac08 6 days ago 318MB
2025-05-13 20:29:44.145239 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ae7fe18eaf3e 6 days ago 329MB
2025-05-13 20:29:44.145245 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 2541622ae785 6 days ago 417MB
2025-05-13 20:29:44.145252 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 58e55a1b66e3 6 days ago 746MB
2025-05-13 20:29:44.145259 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c143bd7f4121 6 days ago 318MB
2025-05-13 20:29:44.145266 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 5dd5c89951f8 6 days ago 626MB
2025-05-13 20:29:44.145272 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 340739858985 6 days ago 590MB
2025-05-13 20:29:44.145279 | orchestrator | registry.osism.tech/kolla/redis 2024.2 00384dafd051 6 days ago 324MB
2025-05-13 20:29:44.145286 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 45c0ed11fefe 6 days ago 324MB
2025-05-13 20:29:44.145292 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 3e2b688ee000 6 days ago 361MB
2025-05-13 20:29:44.145299 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 62d56b6fac4e 6 days ago 361MB
2025-05-13 20:29:44.145321 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d7167bf51937 6 days ago 344MB
2025-05-13 20:29:44.145328 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4972b33b6697 6 days ago 351MB
2025-05-13 20:29:44.145335 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1b41fe8ac6d5 6 days ago 410MB
2025-05-13 20:29:44.145342 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 8dc226730d91 6 days ago 358MB
2025-05-13 20:29:44.145349 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 74d6e103330c 6 days ago 353MB
2025-05-13 20:29:44.145362 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 268d65c18d83 6 days ago 1.13GB
2025-05-13 20:29:44.145374 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 a550ee2c1fb2 6 days ago 1.11GB
2025-05-13 20:29:44.145386 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 271202743813 6 days ago 1.11GB
2025-05-13 20:29:44.145398 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 1cbf127747d4 6 days ago 1.15GB
2025-05-13 20:29:44.145410 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 ad86766891c6 6 days ago 1.06GB
2025-05-13 20:29:44.145422 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 da249321181d 6 days ago 1.06GB
2025-05-13 20:29:44.145435 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f8c92b9f65e4 6 days ago 1.06GB
2025-05-13 20:29:44.145446 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 227c0b84f8a2 6 days ago 1.41GB
2025-05-13 20:29:44.145459 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 8635e59a338d 6 days ago 1.41GB
2025-05-13 20:29:44.145471 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6e8318f9146d 6 days ago 1.04GB
2025-05-13 20:29:44.145481 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d112d35cb4cc 6 days ago 1.05GB
2025-05-13 20:29:44.145488 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 12eb62b255c1 6 days ago 1.05GB
2025-05-13 20:29:44.145495 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d5ed39be7469 6 days ago 1.06GB
2025-05-13 20:29:44.145501 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a636aa737c69 6 days ago 1.05GB
2025-05-13 20:29:44.145524 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 1b750e4a57a6 6 days ago 1.05GB
2025-05-13 20:29:44.145531 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 e643924bd3df 6 days ago 1.06GB
2025-05-13 20:29:44.145538 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 627530339ea2 6 days ago 1.42GB
2025-05-13 20:29:44.145544 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 1693a9681618 6 days ago 1.29GB
2025-05-13 20:29:44.145551 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 13f6d887f84c 6 days ago 1.29GB
2025-05-13 20:29:44.145557 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 229d0afc6727 6 days ago 1.29GB
2025-05-13 20:29:44.145564 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 85d71337ad49 6 days ago 1.1GB
2025-05-13 20:29:44.145570 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 90c7cfd6b9f1 6 days ago 1.12GB
2025-05-13 20:29:44.145576 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 a050c19ba280 6 days ago 1.1GB
2025-05-13 20:29:44.145583 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 eea4b2b0f79c 6 days ago 1.1GB
2025-05-13 20:29:44.145597 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 67f9c52616ca 6 days ago 1.12GB
2025-05-13 20:29:44.145604 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 93b5d082cb86 6 days ago 1.31GB
2025-05-13 20:29:44.145610 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 93300b4fa890 6 days ago 1.19GB
2025-05-13 20:29:44.145617 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 9125e5efb56e 6 days ago 947MB
2025-05-13 20:29:44.145625 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 9cb6a4feaa4c 6 days ago 946MB
2025-05-13 20:29:44.145632 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ca70d4f12a66 6 days ago 947MB
2025-05-13 20:29:44.145639 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 ca1be25de8b6 6 days ago 946MB
2025-05-13 20:29:44.145647 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 60f89630a675 7 days ago 1.21GB
2025-05-13 20:29:44.145654 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 91a2a6c5d8a0 7 days ago 1.24GB
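[Editor's note: both nodes should carry identical 2024.2 / reef images. One hedged way to spot drift, assuming the listing output is stable enough to compare, is to checksum the sorted inventory per node:]

    # Hash each node's sorted image listing; differing checksums point at a
    # node that missed an image pull (illustrative, not part of the job).
    for node in testbed-node-1 testbed-node-2; do
        echo -n "${node}: "
        osism container "${node}" images | sort | sha256sum
    done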
2025-05-13 20:29:44.331376 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-05-13 20:29:44.335892 | orchestrator | + set -e
2025-05-13 20:29:44.335964 | orchestrator | + source /opt/manager-vars.sh
2025-05-13 20:29:44.335974 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-13 20:29:44.335981 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-13 20:29:44.335988 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-13 20:29:44.335995 | orchestrator | ++ CEPH_VERSION=reef
2025-05-13 20:29:44.336006 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-13 20:29:44.336014 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-13 20:29:44.336022 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-13 20:29:44.336029 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-13 20:29:44.336036 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-13 20:29:44.336043 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-13 20:29:44.336050 | orchestrator | ++ export ARA=false
2025-05-13 20:29:44.336056 | orchestrator | ++ ARA=false
2025-05-13 20:29:44.336063 | orchestrator | ++ export TEMPEST=false
2025-05-13 20:29:44.336070 | orchestrator | ++ TEMPEST=false
2025-05-13 20:29:44.336076 | orchestrator | ++ export IS_ZUUL=true
2025-05-13 20:29:44.336083 | orchestrator | ++ IS_ZUUL=true
2025-05-13 20:29:44.336090 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-05-13 20:29:44.336096 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-05-13 20:29:44.336103 | orchestrator | ++ export EXTERNAL_API=false
2025-05-13 20:29:44.336109 | orchestrator | ++ EXTERNAL_API=false
2025-05-13 20:29:44.336116 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-13 20:29:44.336122 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-13 20:29:44.336129 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-13 20:29:44.336135 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-13 20:29:44.336142 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-13 20:29:44.336148 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-13 20:29:44.336155 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-13 20:29:44.336162 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-05-13 20:29:44.343861 | orchestrator | + set -e
2025-05-13 20:29:44.343932 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-13 20:29:44.343941 | orchestrator | ++ export INTERACTIVE=false
2025-05-13 20:29:44.343949 | orchestrator | ++ INTERACTIVE=false
2025-05-13 20:29:44.343956 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-13 20:29:44.343963 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-13 20:29:44.343981 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-05-13 20:29:44.343996 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-05-13 20:29:44.354098 | orchestrator |
2025-05-13 20:29:44.354179 | orchestrator | # Ceph status
2025-05-13 20:29:44.354194 | orchestrator |
2025-05-13 20:29:44.354207 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-13 20:29:44.354219 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-13 20:29:44.354229 | orchestrator | + echo
2025-05-13 20:29:44.354236 | orchestrator | + echo '# Ceph status'
2025-05-13 20:29:44.354243 | orchestrator | + echo
2025-05-13 20:29:44.354250 | orchestrator | + ceph -s
2025-05-13 20:29:44.960447 | orchestrator |   cluster:
2025-05-13 20:29:44.960601 | orchestrator |     id:     11111111-1111-1111-1111-111111111111
2025-05-13 20:29:44.960657 | orchestrator |     health: HEALTH_OK
2025-05-13 20:29:44.960673 | orchestrator |
2025-05-13 20:29:44.960684 | orchestrator |   services:
2025-05-13 20:29:44.960696 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2025-05-13 20:29:44.960708 | orchestrator |     mgr: testbed-node-1(active, since 16m), standbys: testbed-node-0, testbed-node-2
2025-05-13 20:29:44.960720 | orchestrator |     mds: 1/1 daemons up, 2 standby
2025-05-13 20:29:44.960731 | orchestrator |     osd: 6 osds: 6 up (since 24m), 6 in (since 25m)
2025-05-13 20:29:44.960742 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2025-05-13 20:29:44.960753 | orchestrator |
2025-05-13 20:29:44.960764 | orchestrator |   data:
2025-05-13 20:29:44.960775 | orchestrator |     volumes: 1/1 healthy
2025-05-13 20:29:44.960786 | orchestrator |     pools:   14 pools, 401 pgs
2025-05-13 20:29:44.960878 | orchestrator |     objects: 522 objects, 2.2 GiB
2025-05-13 20:29:44.960908 | orchestrator |     usage:   7.1 GiB used, 113 GiB / 120 GiB avail
2025-05-13 20:29:44.960929 | orchestrator |     pgs:     401 active+clean
2025-05-13 20:29:44.960946 | orchestrator |
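[Editor's note: the check script eyeballs `ceph -s` here; the same check can be made machine-readable, since Ceph can emit JSON. A minimal sketch that fails unless the cluster reports HEALTH_OK:]

    # Non-interactive health gate: exits non-zero unless health is HEALTH_OK.
    test "$(ceph -s --format json | jq -r .health.status)" = "HEALTH_OK"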
2025-05-13 20:29:44.998217 | orchestrator |
2025-05-13 20:29:44.998331 | orchestrator | # Ceph versions
2025-05-13 20:29:44.998347 | orchestrator |
2025-05-13 20:29:44.998360 | orchestrator | + echo
2025-05-13 20:29:44.998371 | orchestrator | + echo '# Ceph versions'
2025-05-13 20:29:44.998383 | orchestrator | + echo
2025-05-13 20:29:44.998394 | orchestrator | + ceph versions
2025-05-13 20:29:45.545277 | orchestrator | {
2025-05-13 20:29:45.545365 | orchestrator |     "mon": {
2025-05-13 20:29:45.545392 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-13 20:29:45.545401 | orchestrator |     },
2025-05-13 20:29:45.545408 | orchestrator |     "mgr": {
2025-05-13 20:29:45.545416 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-13 20:29:45.545423 | orchestrator |     },
2025-05-13 20:29:45.545430 | orchestrator |     "osd": {
2025-05-13 20:29:45.545437 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-05-13 20:29:45.545444 | orchestrator |     },
2025-05-13 20:29:45.545451 | orchestrator |     "mds": {
2025-05-13 20:29:45.545458 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-13 20:29:45.545465 | orchestrator |     },
2025-05-13 20:29:45.545472 | orchestrator |     "rgw": {
2025-05-13 20:29:45.545479 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-05-13 20:29:45.545486 | orchestrator |     },
2025-05-13 20:29:45.545493 | orchestrator |     "overall": {
2025-05-13 20:29:45.545501 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-05-13 20:29:45.545508 | orchestrator |     }
2025-05-13 20:29:45.545515 | orchestrator | }
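[Editor's note: all 18 daemons run the same 18.2.7 reef build. Since `ceph versions` already emits JSON, uniformity can be asserted in one line; a minimal sketch:]

    # Assert no daemon lags on an older build: the "overall" map must
    # contain exactly one version entry (jq -e sets the exit status).
    ceph versions | jq -e '.overall | length == 1'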
2025-05-13 20:29:45.578347 | orchestrator |
2025-05-13 20:29:45.578448 | orchestrator | # Ceph OSD tree
2025-05-13 20:29:45.578463 | orchestrator |
2025-05-13 20:29:45.578474 | orchestrator | + echo
2025-05-13 20:29:45.578485 | orchestrator | + echo '# Ceph OSD tree'
2025-05-13 20:29:45.578497 | orchestrator | + echo
2025-05-13 20:29:45.578506 | orchestrator | + ceph osd df tree
2025-05-13 20:29:46.110574 | orchestrator | ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME
2025-05-13 20:29:46.110697 | orchestrator | -1       0.11691        - 120 GiB 7.1 GiB 6.7 GiB   6 KiB 430 MiB 113 GiB 5.92 1.00   -        root default
2025-05-13 20:29:46.110712 | orchestrator | -5       0.03897        -  40 GiB 2.4 GiB 2.2 GiB   2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-3
2025-05-13 20:29:46.110724 | orchestrator |  0   hdd 0.01949  1.00000  20 GiB 1.1 GiB 1.0 GiB   1 KiB  74 MiB  19 GiB 5.61 0.95 190     up osd.0
2025-05-13 20:29:46.110735 | orchestrator |  4   hdd 0.01949  1.00000  20 GiB 1.2 GiB 1.2 GiB   1 KiB  70 MiB  19 GiB 6.22 1.05 202     up osd.4
2025-05-13 20:29:46.110746 | orchestrator | -3       0.03897        -  40 GiB 2.4 GiB 2.2 GiB   2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-4
2025-05-13 20:29:46.110757 | orchestrator |  1   hdd 0.01949  1.00000  20 GiB 1.0 GiB 955 MiB   1 KiB  70 MiB  19 GiB 5.01 0.85 209     up osd.1
2025-05-13 20:29:46.110767 | orchestrator |  3   hdd 0.01949  1.00000  20 GiB 1.4 GiB 1.3 GiB   1 KiB  74 MiB  19 GiB 6.82 1.15 181     up osd.3
2025-05-13 20:29:46.110805 | orchestrator | -7       0.03897        -  40 GiB 2.4 GiB 2.2 GiB   2 KiB 143 MiB  38 GiB 5.92 1.00   -        host testbed-node-5
2025-05-13 20:29:46.110856 | orchestrator |  2   hdd 0.01949  1.00000  20 GiB 1.3 GiB 1.3 GiB   1 KiB  70 MiB  19 GiB 6.69 1.13 191     up osd.2
2025-05-13 20:29:46.110868 | orchestrator |  5   hdd 0.01949  1.00000  20 GiB 1.0 GiB 979 MiB   1 KiB  74 MiB  19 GiB 5.15 0.87 197     up osd.5
2025-05-13 20:29:46.110878 | orchestrator |                      TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-05-13 20:29:46.110890 | orchestrator | MIN/MAX VAR: 0.85/1.15  STDDEV: 0.71
2025-05-13 20:29:46.155048 | orchestrator |
2025-05-13 20:29:46.155150 | orchestrator | # Ceph monitor status
2025-05-13 20:29:46.155164 | orchestrator |
2025-05-13 20:29:46.155176 | orchestrator | + echo
2025-05-13 20:29:46.155188 | orchestrator | + echo '# Ceph monitor status'
2025-05-13 20:29:46.155199 | orchestrator | + echo
2025-05-13 20:29:46.155210 | orchestrator | + ceph mon stat
2025-05-13 20:29:46.710194 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-05-13 20:29:46.741084 | orchestrator |
2025-05-13 20:29:46.741221 | orchestrator | # Ceph quorum status
2025-05-13 20:29:46.741247 | orchestrator |
2025-05-13 20:29:46.741267 | orchestrator | + echo
2025-05-13 20:29:46.741287 | orchestrator | + echo '# Ceph quorum status'
2025-05-13 20:29:46.741306 | orchestrator | + echo
2025-05-13 20:29:46.741439 | orchestrator | + ceph quorum_status
2025-05-13 20:29:46.741981 | orchestrator | + jq
2025-05-13 20:29:47.345715 | orchestrator | {
2025-05-13 20:29:47.345904 | orchestrator |   "election_epoch": 6,
2025-05-13 20:29:47.345926 | orchestrator |   "quorum": [
2025-05-13 20:29:47.345939 | orchestrator |     0,
2025-05-13 20:29:47.345951 | orchestrator |     1,
2025-05-13 20:29:47.345962 | orchestrator |     2
2025-05-13 20:29:47.345973 | orchestrator |   ],
2025-05-13 20:29:47.345984 | orchestrator |   "quorum_names": [
2025-05-13 20:29:47.345995 | orchestrator |     "testbed-node-0",
2025-05-13 20:29:47.346007 | orchestrator |     "testbed-node-1",
2025-05-13 20:29:47.346076 | orchestrator |     "testbed-node-2"
2025-05-13 20:29:47.346090 | orchestrator |   ],
2025-05-13 20:29:47.346101 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2025-05-13 20:29:47.346114 | orchestrator |   "quorum_age": 1685,
2025-05-13 20:29:47.346124 | orchestrator |   "features": {
2025-05-13 20:29:47.346135 | orchestrator |     "quorum_con": "4540138322906710015",
2025-05-13 20:29:47.346147 | orchestrator |     "quorum_mon": [
2025-05-13 20:29:47.346158 | orchestrator |       "kraken",
2025-05-13 20:29:47.346168 | orchestrator |       "luminous",
2025-05-13 20:29:47.346179 | orchestrator |       "mimic",
2025-05-13 20:29:47.346190 | orchestrator |       "osdmap-prune",
2025-05-13 20:29:47.346201 | orchestrator |       "nautilus",
2025-05-13 20:29:47.346212 | orchestrator |       "octopus",
2025-05-13 20:29:47.346222 | orchestrator |       "pacific",
2025-05-13 20:29:47.346233 | orchestrator |       "elector-pinging",
2025-05-13 20:29:47.346245 | orchestrator |       "quincy",
2025-05-13 20:29:47.346257 | orchestrator |       "reef"
2025-05-13 20:29:47.346269 | orchestrator |     ]
2025-05-13 20:29:47.346281 | orchestrator |   },
2025-05-13 20:29:47.346293 | orchestrator |   "monmap": {
2025-05-13 20:29:47.346305 | orchestrator |     "epoch": 1,
2025-05-13 20:29:47.346317 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2025-05-13 20:29:47.346330 | orchestrator |     "modified": "2025-05-13T20:01:19.772711Z",
2025-05-13 20:29:47.346342 | orchestrator |     "created": "2025-05-13T20:01:19.772711Z",
2025-05-13 20:29:47.346354 | orchestrator |     "min_mon_release": 18,
2025-05-13 20:29:47.346365 | orchestrator |     "min_mon_release_name": "reef",
2025-05-13 20:29:47.346376 | orchestrator |     "election_strategy": 1,
2025-05-13 20:29:47.346387 | orchestrator |     "disallowed_leaders: ": "",
2025-05-13 20:29:47.346398 | orchestrator |     "stretch_mode": false,
2025-05-13 20:29:47.346408 | orchestrator |     "tiebreaker_mon": "",
2025-05-13 20:29:47.346419 | orchestrator |     "removed_ranks: ": "",
2025-05-13 20:29:47.346430 | orchestrator |     "features": {
2025-05-13 20:29:47.346440 | orchestrator |       "persistent": [
2025-05-13 20:29:47.346451 | orchestrator |         "kraken",
2025-05-13 20:29:47.346461 | orchestrator |         "luminous",
2025-05-13 20:29:47.346472 | orchestrator |         "mimic",
2025-05-13 20:29:47.346482 | orchestrator |         "osdmap-prune",
2025-05-13 20:29:47.346493 | orchestrator |         "nautilus",
2025-05-13 20:29:47.346526 | orchestrator |         "octopus",
2025-05-13 20:29:47.346538 | orchestrator |         "pacific",
2025-05-13 20:29:47.346549 | orchestrator |         "elector-pinging",
2025-05-13 20:29:47.346559 | orchestrator |         "quincy",
2025-05-13 20:29:47.346570 | orchestrator |         "reef"
2025-05-13 20:29:47.346581 | orchestrator |       ],
2025-05-13 20:29:47.346592 | orchestrator |       "optional": []
2025-05-13 20:29:47.346604 | orchestrator |     },
2025-05-13 20:29:47.346614 | orchestrator |     "mons": [
2025-05-13 20:29:47.346625 | orchestrator |       {
2025-05-13 20:29:47.346636 | orchestrator |         "rank": 0,
2025-05-13 20:29:47.346647 | orchestrator |         "name": "testbed-node-0",
2025-05-13 20:29:47.346658 | orchestrator |         "public_addrs": {
2025-05-13 20:29:47.346668 | orchestrator |           "addrvec": [
2025-05-13 20:29:47.346680 | orchestrator |             {
2025-05-13 20:29:47.346691 | orchestrator |               "type": "v2",
2025-05-13 20:29:47.346702 | orchestrator |               "addr": "192.168.16.10:3300",
2025-05-13 20:29:47.346713 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.346724 | orchestrator |             },
2025-05-13 20:29:47.346734 | orchestrator |             {
2025-05-13 20:29:47.346745 | orchestrator |               "type": "v1",
2025-05-13 20:29:47.346756 | orchestrator |               "addr": "192.168.16.10:6789",
2025-05-13 20:29:47.346767 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.346778 | orchestrator |             }
2025-05-13 20:29:47.346789 | orchestrator |           ]
2025-05-13 20:29:47.346800 | orchestrator |         },
2025-05-13 20:29:47.346838 | orchestrator |         "addr": "192.168.16.10:6789/0",
2025-05-13 20:29:47.346851 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2025-05-13 20:29:47.346863 | orchestrator |         "priority": 0,
2025-05-13 20:29:47.346873 | orchestrator |         "weight": 0,
2025-05-13 20:29:47.346884 | orchestrator |         "crush_location": "{}"
2025-05-13 20:29:47.346894 | orchestrator |       },
2025-05-13 20:29:47.346905 | orchestrator |       {
2025-05-13 20:29:47.346915 | orchestrator |         "rank": 1,
2025-05-13 20:29:47.346926 | orchestrator |         "name": "testbed-node-1",
2025-05-13 20:29:47.346936 | orchestrator |         "public_addrs": {
2025-05-13 20:29:47.346947 | orchestrator |           "addrvec": [
2025-05-13 20:29:47.346957 | orchestrator |             {
2025-05-13 20:29:47.346968 | orchestrator |               "type": "v2",
2025-05-13 20:29:47.346978 | orchestrator |               "addr": "192.168.16.11:3300",
2025-05-13 20:29:47.346989 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.346999 | orchestrator |             },
2025-05-13 20:29:47.347010 | orchestrator |             {
2025-05-13 20:29:47.347020 | orchestrator |               "type": "v1",
2025-05-13 20:29:47.347031 | orchestrator |               "addr": "192.168.16.11:6789",
2025-05-13 20:29:47.347041 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.347052 | orchestrator |             }
2025-05-13 20:29:47.347062 | orchestrator |           ]
2025-05-13 20:29:47.347073 | orchestrator |         },
2025-05-13 20:29:47.347083 | orchestrator |         "addr": "192.168.16.11:6789/0",
2025-05-13 20:29:47.347094 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2025-05-13 20:29:47.347104 | orchestrator |         "priority": 0,
2025-05-13 20:29:47.347115 | orchestrator |         "weight": 0,
2025-05-13 20:29:47.347125 | orchestrator |         "crush_location": "{}"
2025-05-13 20:29:47.347136 | orchestrator |       },
2025-05-13 20:29:47.347146 | orchestrator |       {
2025-05-13 20:29:47.347162 | orchestrator |         "rank": 2,
2025-05-13 20:29:47.347181 | orchestrator |         "name": "testbed-node-2",
2025-05-13 20:29:47.347199 | orchestrator |         "public_addrs": {
2025-05-13 20:29:47.347217 | orchestrator |           "addrvec": [
2025-05-13 20:29:47.347234 | orchestrator |             {
2025-05-13 20:29:47.347268 | orchestrator |               "type": "v2",
2025-05-13 20:29:47.347300 | orchestrator |               "addr": "192.168.16.12:3300",
2025-05-13 20:29:47.347319 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.347338 | orchestrator |             },
2025-05-13 20:29:47.347356 | orchestrator |             {
2025-05-13 20:29:47.347375 | orchestrator |               "type": "v1",
2025-05-13 20:29:47.347393 | orchestrator |               "addr": "192.168.16.12:6789",
2025-05-13 20:29:47.347411 | orchestrator |               "nonce": 0
2025-05-13 20:29:47.347426 | orchestrator |             }
2025-05-13 20:29:47.347436 | orchestrator |           ]
2025-05-13 20:29:47.347447 | orchestrator |         },
2025-05-13 20:29:47.347458 | orchestrator |         "addr": "192.168.16.12:6789/0",
2025-05-13 20:29:47.347468 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2025-05-13 20:29:47.347479 | orchestrator |         "priority": 0,
2025-05-13 20:29:47.347490 | orchestrator |         "weight": 0,
2025-05-13 20:29:47.347500 | orchestrator |         "crush_location": "{}"
2025-05-13 20:29:47.347524 | orchestrator |       }
2025-05-13 20:29:47.347570 | orchestrator |     ]
2025-05-13 20:29:47.347582 | orchestrator |   }
2025-05-13 20:29:47.347593 | orchestrator | }
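[Editor's note: the quorum JSON above carries both the in-quorum mons and the full monmap, so a missing monitor can be detected mechanically; a minimal jq sketch in the spirit of the `ceph quorum_status | jq` call already used here:]

    # Prints true only when every monitor in the monmap is also in quorum.
    ceph quorum_status | jq '(.quorum_names | length) == (.monmap.mons | length)'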
2025-05-13 20:29:47.347604 | orchestrator |
2025-05-13 20:29:47.347615 | orchestrator | # Ceph free space status
2025-05-13 20:29:47.347626 | orchestrator |
2025-05-13 20:29:47.347637 | orchestrator | + echo
2025-05-13 20:29:47.347648 | orchestrator | + echo '# Ceph free space status'
2025-05-13 20:29:47.347659 | orchestrator | + echo
2025-05-13 20:29:47.347669 | orchestrator | + ceph df
2025-05-13 20:29:47.903136 | orchestrator | --- RAW STORAGE ---
2025-05-13 20:29:47.903243 | orchestrator | CLASS    SIZE   AVAIL     USED  RAW USED  %RAW USED
2025-05-13 20:29:47.903267 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2025-05-13 20:29:47.903289 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2025-05-13 20:29:47.903308 | orchestrator |
2025-05-13 20:29:47.903327 | orchestrator | --- POOLS ---
2025-05-13 20:29:47.903347 | orchestrator | POOL                       ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
2025-05-13 20:29:47.903370 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2025-05-13 20:29:47.903391 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2025-05-13 20:29:47.903411 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2025-05-13 20:29:47.903431 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2025-05-13 20:29:47.903452 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2025-05-13 20:29:47.903473 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2025-05-13 20:29:47.903494 | orchestrator | default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
2025-05-13 20:29:47.903513 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2025-05-13 20:29:47.903531 | orchestrator | .rgw.root                   9   32  2.6 KiB        6   48 KiB      0     53 GiB
2025-05-13 20:29:47.903543 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2025-05-13 20:29:47.903554 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2025-05-13 20:29:47.903565 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.92     35 GiB
2025-05-13 20:29:47.903576 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2025-05-13 20:29:47.903587 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2025-05-13 20:29:47.947453 | orchestrator | ++ semver latest 5.0.0
2025-05-13 20:29:47.997117 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-13 20:29:47.997211 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-13 20:29:47.997224 | orchestrator | + [[ ! -e /etc/redhat-release ]]
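[Editor's note: the `semver latest 5.0.0` / `[[ -1 -eq -1 ]]` / `[[ latest != \l\a\t\e\s\t ]]` pattern recurs throughout the script. Read together, it is a version gate; a hedged reconstruction, assuming the semver helper prints -1/0/1 for older/equal/newer:]

    # Sketch of the version gate implied by the trace: take a compatibility
    # path only when MANAGER_VERSION is a real version older than 5.0.0
    # ("latest" compares as -1 here but is explicitly exempted).
    if [[ "$(semver "${MANAGER_VERSION}" 5.0.0)" -eq -1 ]] \
            && [[ "${MANAGER_VERSION}" != "latest" ]]; then
        echo "pre-5.0.0 compatibility path would run here"
    fi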
2025-05-13 20:29:47.997235 | orchestrator | + osism apply facts
2025-05-13 20:29:49.745612 | orchestrator | 2025-05-13 20:29:49 | INFO  | Task dc907c72-7db8-4646-8d6c-3b1f2b8f14c7 (facts) was prepared for execution.
2025-05-13 20:29:49.745715 | orchestrator | 2025-05-13 20:29:49 | INFO  | It takes a moment until task dc907c72-7db8-4646-8d6c-3b1f2b8f14c7 (facts) has been started and output is visible here.
2025-05-13 20:29:55.898164 | orchestrator |
2025-05-13 20:29:55.899266 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-13 20:29:55.900733 | orchestrator |
2025-05-13 20:29:55.901718 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-13 20:29:55.902947 | orchestrator | Tuesday 13 May 2025 20:29:55 +0000 (0:00:01.814) 0:00:01.814 ***********
2025-05-13 20:29:56.647501 | orchestrator | ok: [testbed-manager]
2025-05-13 20:29:59.181031 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:29:59.182278 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:29:59.185130 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:29:59.188122 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:29:59.189772 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:29:59.192012 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:29:59.192072 | orchestrator |
2025-05-13 20:29:59.192113 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-13 20:29:59.192143 | orchestrator | Tuesday 13 May 2025 20:29:59 +0000 (0:00:03.284) 0:00:05.099 ***********
2025-05-13 20:29:59.389134 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:29:59.557281 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:29:59.647178 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:29:59.738008 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:29:59.820465 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:30:01.806907 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:30:01.809256 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:30:01.812762 | orchestrator |
2025-05-13 20:30:01.812870 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-13 20:30:01.812889 | orchestrator |
2025-05-13 20:30:01.812903 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-13 20:30:01.814579 | orchestrator | Tuesday 13 May 2025 20:30:01 +0000 (0:00:02.631) 0:00:07.731 ***********
2025-05-13 20:30:09.111483 | orchestrator | ok: [testbed-node-1]
2025-05-13 20:30:09.111607 | orchestrator | ok: [testbed-node-0]
2025-05-13 20:30:09.114392 | orchestrator | ok: [testbed-node-2]
2025-05-13 20:30:09.114874 | orchestrator | ok: [testbed-manager]
2025-05-13 20:30:09.116478 | orchestrator | ok: [testbed-node-4]
2025-05-13 20:30:09.117286 | orchestrator | ok: [testbed-node-3]
2025-05-13 20:30:09.117612 | orchestrator | ok: [testbed-node-5]
2025-05-13 20:30:09.118072 | orchestrator |
2025-05-13 20:30:09.118336 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-13 20:30:09.118754 | orchestrator |
2025-05-13 20:30:09.119101 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-13 20:30:09.119536 | orchestrator | Tuesday 13 May 2025 20:30:09 +0000 (0:00:07.302) 0:00:15.033 ***********
2025-05-13 20:30:09.345621 | orchestrator | skipping: [testbed-manager]
2025-05-13 20:30:09.445730 | orchestrator | skipping: [testbed-node-0]
2025-05-13 20:30:09.544266 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:30:09.642506 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:30:09.763186 | orchestrator | skipping: [testbed-node-3]
2025-05-13 20:30:12.161902 | orchestrator | skipping: [testbed-node-4]
2025-05-13 20:30:12.162067 | orchestrator | skipping: [testbed-node-5]
2025-05-13 20:30:12.162086 | orchestrator |
2025-05-13 20:30:12.163108 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:30:12.163156 | orchestrator | 2025-05-13 20:30:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 20:30:12.163167 | orchestrator | 2025-05-13 20:30:12 | INFO  | Please wait and do not abort execution.
2025-05-13 20:30:12.165888 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.166641 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.168770 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.169634 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.171293 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.171670 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.172138 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-13 20:30:12.172471 | orchestrator |
2025-05-13 20:30:12.172967 | orchestrator |
2025-05-13 20:30:12.173244 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:30:12.173620 | orchestrator | Tuesday 13 May 2025 20:30:12 +0000 (0:00:03.052) 0:00:18.085 ***********
2025-05-13 20:30:12.174051 | orchestrator | ===============================================================================
2025-05-13 20:30:12.174304 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.30s
2025-05-13 20:30:12.174693 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.28s
2025-05-13 20:30:12.175099 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.05s
2025-05-13 20:30:12.175681 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.63s
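[Editor's note: include.sh exported OSISM_APPLY_RETRY=1 earlier in this log. A plausible use of that variable — a hedged sketch, not the actual testbed script — is a simple retry wrapper around the apply call:]

    # Hypothetical retry wrapper in the spirit of OSISM_APPLY_RETRY:
    # re-run the apply up to N times, stopping on the first success.
    for attempt in $(seq 1 "${OSISM_APPLY_RETRY:-1}"); do
        osism apply facts && break
        echo "osism apply facts failed (attempt ${attempt}), retrying"
    done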
orchestrator | 2025-05-13 20:31:11.422604 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-13 20:31:11.422616 | orchestrator | Tuesday 13 May 2025 20:30:25 +0000 (0:00:01.738) 0:00:05.648 *********** 2025-05-13 20:31:11.422627 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.422639 | orchestrator | 2025-05-13 20:31:11.422650 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-13 20:31:11.422678 | orchestrator | Tuesday 13 May 2025 20:30:26 +0000 (0:00:01.016) 0:00:06.665 *********** 2025-05-13 20:31:11.422689 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.422700 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:31:11.422711 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:31:11.422722 | orchestrator | 2025-05-13 20:31:11.422733 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-13 20:31:11.422744 | orchestrator | Tuesday 13 May 2025 20:30:27 +0000 (0:00:01.563) 0:00:08.228 *********** 2025-05-13 20:31:11.422755 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.422766 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:31:11.422810 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:31:11.422821 | orchestrator | 2025-05-13 20:31:11.422832 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-13 20:31:11.422843 | orchestrator | Tuesday 13 May 2025 20:30:29 +0000 (0:00:01.913) 0:00:10.141 *********** 2025-05-13 20:31:11.422854 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.422865 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:31:11.422876 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:31:11.422889 | orchestrator | 2025-05-13 20:31:11.422901 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-13 20:31:11.422913 | orchestrator | Tuesday 13 May 2025 20:30:30 +0000 (0:00:01.249) 0:00:11.391 *********** 2025-05-13 20:31:11.422925 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.422937 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:31:11.422951 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:31:11.422963 | orchestrator | 2025-05-13 20:31:11.422975 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:31:11.422987 | orchestrator | Tuesday 13 May 2025 20:30:32 +0000 (0:00:01.449) 0:00:12.841 *********** 2025-05-13 20:31:11.422999 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.423011 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:31:11.423023 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:31:11.423035 | orchestrator | 2025-05-13 20:31:11.423047 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-05-13 20:31:11.423083 | orchestrator | Tuesday 13 May 2025 20:30:33 +0000 (0:00:01.226) 0:00:14.067 *********** 2025-05-13 20:31:11.423094 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423105 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:31:11.423116 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:31:11.423127 | orchestrator | 2025-05-13 20:31:11.423137 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-05-13 20:31:11.423148 | orchestrator | Tuesday 13 May 2025 20:30:34 +0000 (0:00:01.242) 0:00:15.309 *********** 2025-05-13 
20:31:11.423159 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.423169 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:31:11.423180 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:31:11.423190 | orchestrator | 2025-05-13 20:31:11.423201 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:31:11.423212 | orchestrator | Tuesday 13 May 2025 20:30:36 +0000 (0:00:01.827) 0:00:17.137 *********** 2025-05-13 20:31:11.423222 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423233 | orchestrator | 2025-05-13 20:31:11.423244 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:31:11.423254 | orchestrator | Tuesday 13 May 2025 20:30:37 +0000 (0:00:01.172) 0:00:18.309 *********** 2025-05-13 20:31:11.423265 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423276 | orchestrator | 2025-05-13 20:31:11.423286 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-13 20:31:11.423297 | orchestrator | Tuesday 13 May 2025 20:30:38 +0000 (0:00:01.145) 0:00:19.455 *********** 2025-05-13 20:31:11.423307 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423318 | orchestrator | 2025-05-13 20:31:11.423329 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:11.423340 | orchestrator | Tuesday 13 May 2025 20:30:40 +0000 (0:00:01.218) 0:00:20.674 *********** 2025-05-13 20:31:11.423351 | orchestrator | 2025-05-13 20:31:11.423361 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:11.423372 | orchestrator | Tuesday 13 May 2025 20:30:40 +0000 (0:00:00.441) 0:00:21.115 *********** 2025-05-13 20:31:11.423383 | orchestrator | 2025-05-13 20:31:11.423394 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:11.423404 | orchestrator | Tuesday 13 May 2025 20:30:40 +0000 (0:00:00.387) 0:00:21.502 *********** 2025-05-13 20:31:11.423414 | orchestrator | 2025-05-13 20:31:11.423426 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:31:11.423441 | orchestrator | Tuesday 13 May 2025 20:30:41 +0000 (0:00:00.738) 0:00:22.241 *********** 2025-05-13 20:31:11.423459 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423476 | orchestrator | 2025-05-13 20:31:11.423613 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-13 20:31:11.423640 | orchestrator | Tuesday 13 May 2025 20:30:42 +0000 (0:00:01.156) 0:00:23.397 *********** 2025-05-13 20:31:11.423657 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423668 | orchestrator | 2025-05-13 20:31:11.423699 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-05-13 20:31:11.423711 | orchestrator | Tuesday 13 May 2025 20:30:43 +0000 (0:00:01.148) 0:00:24.546 *********** 2025-05-13 20:31:11.423722 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.423732 | orchestrator | 2025-05-13 20:31:11.423743 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-05-13 20:31:11.423753 | orchestrator | Tuesday 13 May 2025 20:30:44 +0000 (0:00:01.029) 0:00:25.575 *********** 2025-05-13 20:31:11.423764 | orchestrator | changed: [testbed-node-0] 2025-05-13 
20:31:11.423810 | orchestrator | 2025-05-13 20:31:11.423826 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-05-13 20:31:11.423837 | orchestrator | Tuesday 13 May 2025 20:30:47 +0000 (0:00:02.450) 0:00:28.025 *********** 2025-05-13 20:31:11.423848 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.423858 | orchestrator | 2025-05-13 20:31:11.423869 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-05-13 20:31:11.423892 | orchestrator | Tuesday 13 May 2025 20:30:48 +0000 (0:00:01.111) 0:00:29.137 *********** 2025-05-13 20:31:11.423903 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.423914 | orchestrator | 2025-05-13 20:31:11.423925 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-05-13 20:31:11.423936 | orchestrator | Tuesday 13 May 2025 20:30:49 +0000 (0:00:01.046) 0:00:30.184 *********** 2025-05-13 20:31:11.423947 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.423957 | orchestrator | 2025-05-13 20:31:11.423968 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-05-13 20:31:11.423979 | orchestrator | Tuesday 13 May 2025 20:30:50 +0000 (0:00:01.141) 0:00:31.325 *********** 2025-05-13 20:31:11.423992 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.424011 | orchestrator | 2025-05-13 20:31:11.424029 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-05-13 20:31:11.424044 | orchestrator | Tuesday 13 May 2025 20:30:51 +0000 (0:00:01.102) 0:00:32.428 *********** 2025-05-13 20:31:11.424055 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.424066 | orchestrator | 2025-05-13 20:31:11.424078 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-05-13 20:31:11.424088 | orchestrator | Tuesday 13 May 2025 20:30:52 +0000 (0:00:01.014) 0:00:33.442 *********** 2025-05-13 20:31:11.424108 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.424126 | orchestrator | 2025-05-13 20:31:11.424144 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-05-13 20:31:11.424163 | orchestrator | Tuesday 13 May 2025 20:30:53 +0000 (0:00:01.102) 0:00:34.545 *********** 2025-05-13 20:31:11.424182 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.424202 | orchestrator | 2025-05-13 20:31:11.424222 | orchestrator | TASK [Gather status data] ****************************************************** 2025-05-13 20:31:11.424241 | orchestrator | Tuesday 13 May 2025 20:30:55 +0000 (0:00:01.114) 0:00:35.659 *********** 2025-05-13 20:31:11.424299 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:31:11.424317 | orchestrator | 2025-05-13 20:31:11.424328 | orchestrator | TASK [Set health test data] **************************************************** 2025-05-13 20:31:11.424339 | orchestrator | Tuesday 13 May 2025 20:30:57 +0000 (0:00:02.210) 0:00:37.870 *********** 2025-05-13 20:31:11.424349 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.424360 | orchestrator | 2025-05-13 20:31:11.424370 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-05-13 20:31:11.424381 | orchestrator | Tuesday 13 May 2025 20:30:58 +0000 (0:00:01.207) 0:00:39.077 *********** 2025-05-13 20:31:11.424392 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.424402 | 
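The quorum test fetches the monmap from a single mon container and compares the monitors listed there against the current quorum; the FSID test then compares the running cluster's FSID with the configured one. A sketch of both checks, assuming `ceph quorum_status` and `ceph fsid` as the underlying commands (the log only shows task names, not the commands the validator runs):

```bash
#!/usr/bin/env bash
# Sketch of the quorum and FSID checks. Assumptions: local mon container name,
# and an expected FSID taken from your Ceph configuration.
set -euo pipefail

mon="ceph-mon-$(hostname -s)"   # assumption: local mon container name
expected_fsid="..."              # assumption: FSID from your configuration

status=$(docker exec "$mon" ceph quorum_status -f json)

# Every monitor in the monmap should also appear in the quorum.
mons_total=$(echo "$status" | jq '.monmap.mons | length')
mons_quorum=$(echo "$status" | jq '.quorum | length')
if [ "$mons_total" -ne "$mons_quorum" ]; then
  echo "FAILED: only $mons_quorum of $mons_total monitors are in quorum"
  exit 1
fi
echo "PASSED: all $mons_total monitors are in quorum"

# Cluster FSID test: the live cluster's FSID must match the configuration.
actual_fsid=$(docker exec "$mon" ceph fsid)
if [ "$actual_fsid" != "$expected_fsid" ]; then
  echo "FAILED: cluster FSID $actual_fsid does not match configuration"
  exit 1
fi
echo "PASSED: cluster FSID matches configuration"
```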
orchestrator | 2025-05-13 20:31:11.424413 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-05-13 20:31:11.424424 | orchestrator | Tuesday 13 May 2025 20:30:59 +0000 (0:00:01.075) 0:00:40.153 *********** 2025-05-13 20:31:11.424434 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:31:11.424445 | orchestrator | 2025-05-13 20:31:11.424455 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-05-13 20:31:11.424466 | orchestrator | Tuesday 13 May 2025 20:31:00 +0000 (0:00:01.077) 0:00:41.231 *********** 2025-05-13 20:31:11.424477 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.424487 | orchestrator | 2025-05-13 20:31:11.424498 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-05-13 20:31:11.424508 | orchestrator | Tuesday 13 May 2025 20:31:01 +0000 (0:00:00.994) 0:00:42.226 *********** 2025-05-13 20:31:11.424519 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.424529 | orchestrator | 2025-05-13 20:31:11.424540 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-13 20:31:11.424551 | orchestrator | Tuesday 13 May 2025 20:31:02 +0000 (0:00:00.989) 0:00:43.216 *********** 2025-05-13 20:31:11.424561 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:31:11.424572 | orchestrator | 2025-05-13 20:31:11.424582 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-13 20:31:11.424602 | orchestrator | Tuesday 13 May 2025 20:31:03 +0000 (0:00:01.298) 0:00:44.514 *********** 2025-05-13 20:31:11.424613 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:31:11.424624 | orchestrator | 2025-05-13 20:31:11.424635 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:31:11.424646 | orchestrator | Tuesday 13 May 2025 20:31:04 +0000 (0:00:01.107) 0:00:45.622 *********** 2025-05-13 20:31:11.424656 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:31:11.424667 | orchestrator | 2025-05-13 20:31:11.424684 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:31:11.424695 | orchestrator | Tuesday 13 May 2025 20:31:07 +0000 (0:00:02.621) 0:00:48.244 *********** 2025-05-13 20:31:11.424705 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:31:11.424716 | orchestrator | 2025-05-13 20:31:11.424727 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-13 20:31:11.424737 | orchestrator | Tuesday 13 May 2025 20:31:08 +0000 (0:00:01.136) 0:00:49.380 *********** 2025-05-13 20:31:11.424747 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:31:11.424758 | orchestrator | 2025-05-13 20:31:11.424824 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:16.308185 | orchestrator | Tuesday 13 May 2025 20:31:09 +0000 (0:00:01.116) 0:00:50.497 *********** 2025-05-13 20:31:16.308321 | orchestrator | 2025-05-13 20:31:16.308350 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:16.308372 | orchestrator | Tuesday 13 May 2025 20:31:10 +0000 (0:00:00.405) 0:00:50.902 *********** 2025-05-13 20:31:16.308393 | orchestrator | 
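The two cluster-health variants above differ only in what counts as acceptable: in the default mode a warning does not fail the validation, while the (strict) variant passes only on HEALTH_OK (here the strict tasks were skipped, so strict mode was off). A sketch of that decision, assuming HEALTH_WARN is the tolerated non-strict state:

```bash
#!/usr/bin/env bash
# Sketch of the default vs. strict health check. Whether HEALTH_WARN counts
# as "acceptable" in the real validator is an assumption here.
set -euo pipefail

mon="ceph-mon-$(hostname -s)"    # assumption: local mon container name
strict=${CEPH_HEALTH_STRICT:-false}

health=$(docker exec "$mon" ceph health | awk '{print $1}')

if [ "$strict" = "true" ]; then
  # Strict: only a perfectly clean cluster passes.
  [ "$health" = "HEALTH_OK" ] || { echo "FAILED (strict): $health"; exit 1; }
else
  # Non-strict: HEALTH_WARN is tolerated, HEALTH_ERR is not.
  case "$health" in
    HEALTH_OK|HEALTH_WARN) ;;
    *) echo "FAILED: cluster health is $health"; exit 1 ;;
  esac
fi
echo "PASSED: cluster health is $health"
```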
2025-05-13 20:31:16.308413 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:31:16.308433 | orchestrator | Tuesday 13 May 2025 20:31:10 +0000 (0:00:00.416) 0:00:51.318 *********** 2025-05-13 20:31:16.308449 | orchestrator | 2025-05-13 20:31:16.308460 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-13 20:31:16.308471 | orchestrator | Tuesday 13 May 2025 20:31:11 +0000 (0:00:00.715) 0:00:52.034 *********** 2025-05-13 20:31:16.308483 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:31:16.308494 | orchestrator | 2025-05-13 20:31:16.308505 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:31:16.308536 | orchestrator | Tuesday 13 May 2025 20:31:13 +0000 (0:00:02.436) 0:00:54.471 *********** 2025-05-13 20:31:16.308547 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-13 20:31:16.308559 | orchestrator |  "msg": [ 2025-05-13 20:31:16.308571 | orchestrator |  "Validator run completed.", 2025-05-13 20:31:16.308583 | orchestrator |  "You can find the report file here:", 2025-05-13 20:31:16.308594 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-13T20:30:21+00:00-report.json", 2025-05-13 20:31:16.308606 | orchestrator |  "on the following host:", 2025-05-13 20:31:16.308617 | orchestrator |  "testbed-manager" 2025-05-13 20:31:16.308627 | orchestrator |  ] 2025-05-13 20:31:16.308638 | orchestrator | } 2025-05-13 20:31:16.308649 | orchestrator | 2025-05-13 20:31:16.308660 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:31:16.308672 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-13 20:31:16.308684 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 20:31:16.308695 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 20:31:16.308706 | orchestrator | 2025-05-13 20:31:16.308718 | orchestrator | 2025-05-13 20:31:16.308730 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:31:16.308798 | orchestrator | Tuesday 13 May 2025 20:31:15 +0000 (0:00:02.054) 0:00:56.525 *********** 2025-05-13 20:31:16.308812 | orchestrator | =============================================================================== 2025-05-13 20:31:16.308825 | orchestrator | Aggregate test results step one ----------------------------------------- 2.62s 2025-05-13 20:31:16.308837 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.45s 2025-05-13 20:31:16.308849 | orchestrator | Write report file ------------------------------------------------------- 2.44s 2025-05-13 20:31:16.308861 | orchestrator | Get timestamp for report file ------------------------------------------- 2.21s 2025-05-13 20:31:16.308873 | orchestrator | Gather status data ------------------------------------------------------ 2.21s 2025-05-13 20:31:16.308886 | orchestrator | Print report file information ------------------------------------------- 2.05s 2025-05-13 20:31:16.308898 | orchestrator | Get container info ------------------------------------------------------ 1.91s 2025-05-13 20:31:16.308910 | orchestrator | Set test result to passed if ceph-mon 
is running ------------------------ 1.83s 2025-05-13 20:31:16.308922 | orchestrator | Create report output directory ------------------------------------------ 1.74s 2025-05-13 20:31:16.308934 | orchestrator | Flush handlers ---------------------------------------------------------- 1.57s 2025-05-13 20:31:16.308946 | orchestrator | Prepare test data for container existance test -------------------------- 1.56s 2025-05-13 20:31:16.308958 | orchestrator | Flush handlers ---------------------------------------------------------- 1.54s 2025-05-13 20:31:16.308970 | orchestrator | Set test result to passed if container is existing ---------------------- 1.45s 2025-05-13 20:31:16.308983 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.30s 2025-05-13 20:31:16.308995 | orchestrator | Set test result to failed if container is missing ----------------------- 1.25s 2025-05-13 20:31:16.309008 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.24s 2025-05-13 20:31:16.309020 | orchestrator | Prepare test data ------------------------------------------------------- 1.23s 2025-05-13 20:31:16.309031 | orchestrator | Aggregate test results step three --------------------------------------- 1.22s 2025-05-13 20:31:16.309043 | orchestrator | Set health test data ---------------------------------------------------- 1.21s 2025-05-13 20:31:16.309056 | orchestrator | Aggregate test results step one ----------------------------------------- 1.17s 2025-05-13 20:31:16.579460 | orchestrator | + osism validate ceph-mgrs 2025-05-13 20:32:07.619807 | orchestrator | 2025-05-13 20:32:07.619962 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-05-13 20:32:07.620000 | orchestrator | 2025-05-13 20:32:07.620026 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-13 20:32:07.620045 | orchestrator | Tuesday 13 May 2025 20:31:23 +0000 (0:00:01.515) 0:00:01.515 *********** 2025-05-13 20:32:07.620065 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.620085 | orchestrator | 2025-05-13 20:32:07.620103 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-13 20:32:07.620144 | orchestrator | Tuesday 13 May 2025 20:31:26 +0000 (0:00:02.241) 0:00:03.756 *********** 2025-05-13 20:32:07.620165 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.620184 | orchestrator | 2025-05-13 20:32:07.620203 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-13 20:32:07.620222 | orchestrator | Tuesday 13 May 2025 20:31:27 +0000 (0:00:01.600) 0:00:05.357 *********** 2025-05-13 20:32:07.620242 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.620262 | orchestrator | 2025-05-13 20:32:07.620280 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-05-13 20:32:07.620301 | orchestrator | Tuesday 13 May 2025 20:31:28 +0000 (0:00:01.004) 0:00:06.362 *********** 2025-05-13 20:32:07.620320 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.620340 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:32:07.620359 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:32:07.620409 | orchestrator | 2025-05-13 20:32:07.620429 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-13 
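Each `+ osism validate <name>` line marks the deploy script invoking one validator on the manager node; the same sequence can be re-run by hand. A sketch using the validators that appear in this log:

```bash
# Re-run the Ceph validators from this job in order, stopping on the first
# failure. Assumes the osism CLI is available on the manager node, as in
# this testbed.
set -e
for validator in ceph-mons ceph-mgrs ceph-osds; do
  osism validate "$validator"
done
# Report files land under /opt/reports/validator/ on the manager, as shown
# by the "Print report file information" tasks.
```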
20:32:07.620448 | orchestrator | Tuesday 13 May 2025 20:31:30 +0000 (0:00:01.632) 0:00:07.994 *********** 2025-05-13 20:32:07.620467 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:32:07.620486 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:32:07.620504 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.620520 | orchestrator | 2025-05-13 20:32:07.620539 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-13 20:32:07.620556 | orchestrator | Tuesday 13 May 2025 20:31:32 +0000 (0:00:01.912) 0:00:09.907 *********** 2025-05-13 20:32:07.620574 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.620592 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:32:07.620609 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:32:07.620625 | orchestrator | 2025-05-13 20:32:07.620643 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-13 20:32:07.620661 | orchestrator | Tuesday 13 May 2025 20:31:33 +0000 (0:00:01.243) 0:00:11.150 *********** 2025-05-13 20:32:07.620678 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.620694 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:32:07.620711 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:32:07.620797 | orchestrator | 2025-05-13 20:32:07.620817 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:32:07.620835 | orchestrator | Tuesday 13 May 2025 20:31:34 +0000 (0:00:01.425) 0:00:12.576 *********** 2025-05-13 20:32:07.620851 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.620866 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:32:07.620884 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:32:07.620902 | orchestrator | 2025-05-13 20:32:07.620920 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-13 20:32:07.620937 | orchestrator | Tuesday 13 May 2025 20:31:36 +0000 (0:00:01.235) 0:00:13.811 *********** 2025-05-13 20:32:07.620954 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.620971 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:32:07.620989 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:32:07.621006 | orchestrator | 2025-05-13 20:32:07.621024 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-13 20:32:07.621041 | orchestrator | Tuesday 13 May 2025 20:31:37 +0000 (0:00:01.249) 0:00:15.061 *********** 2025-05-13 20:32:07.621057 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.621075 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:32:07.621093 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:32:07.621111 | orchestrator | 2025-05-13 20:32:07.621130 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:32:07.621146 | orchestrator | Tuesday 13 May 2025 20:31:38 +0000 (0:00:01.253) 0:00:16.315 *********** 2025-05-13 20:32:07.621163 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.621179 | orchestrator | 2025-05-13 20:32:07.621196 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:32:07.621214 | orchestrator | Tuesday 13 May 2025 20:31:39 +0000 (0:00:01.300) 0:00:17.615 *********** 2025-05-13 20:32:07.621230 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.621246 | orchestrator | 2025-05-13 20:32:07.621263 | orchestrator | TASK 
[Aggregate test results step three] *************************************** 2025-05-13 20:32:07.621280 | orchestrator | Tuesday 13 May 2025 20:31:40 +0000 (0:00:01.091) 0:00:18.707 *********** 2025-05-13 20:32:07.621296 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.621313 | orchestrator | 2025-05-13 20:32:07.621330 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.621348 | orchestrator | Tuesday 13 May 2025 20:31:42 +0000 (0:00:01.141) 0:00:19.849 *********** 2025-05-13 20:32:07.621367 | orchestrator | 2025-05-13 20:32:07.621386 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.621403 | orchestrator | Tuesday 13 May 2025 20:31:42 +0000 (0:00:00.398) 0:00:20.248 *********** 2025-05-13 20:32:07.621441 | orchestrator | 2025-05-13 20:32:07.621459 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.621477 | orchestrator | Tuesday 13 May 2025 20:31:42 +0000 (0:00:00.397) 0:00:20.645 *********** 2025-05-13 20:32:07.621495 | orchestrator | 2025-05-13 20:32:07.621512 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:32:07.621531 | orchestrator | Tuesday 13 May 2025 20:31:43 +0000 (0:00:00.704) 0:00:21.349 *********** 2025-05-13 20:32:07.621549 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.621566 | orchestrator | 2025-05-13 20:32:07.621583 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-13 20:32:07.621602 | orchestrator | Tuesday 13 May 2025 20:31:44 +0000 (0:00:01.118) 0:00:22.468 *********** 2025-05-13 20:32:07.621620 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.621638 | orchestrator | 2025-05-13 20:32:07.621689 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-05-13 20:32:07.621709 | orchestrator | Tuesday 13 May 2025 20:31:45 +0000 (0:00:01.120) 0:00:23.588 *********** 2025-05-13 20:32:07.621758 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.621777 | orchestrator | 2025-05-13 20:32:07.621797 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-13 20:32:07.621815 | orchestrator | Tuesday 13 May 2025 20:31:46 +0000 (0:00:00.958) 0:00:24.547 *********** 2025-05-13 20:32:07.621834 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:32:07.621853 | orchestrator | 2025-05-13 20:32:07.621881 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-13 20:32:07.621899 | orchestrator | Tuesday 13 May 2025 20:31:49 +0000 (0:00:02.774) 0:00:27.322 *********** 2025-05-13 20:32:07.621916 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.621933 | orchestrator | 2025-05-13 20:32:07.621974 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-13 20:32:07.621993 | orchestrator | Tuesday 13 May 2025 20:31:50 +0000 (0:00:01.251) 0:00:28.574 *********** 2025-05-13 20:32:07.622011 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.622118 | orchestrator | 2025-05-13 20:32:07.622162 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-13 20:32:07.622185 | orchestrator | Tuesday 13 May 2025 20:31:51 +0000 (0:00:01.144) 0:00:29.718 *********** 2025-05-13 
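The mgr module test parses mgr module information as JSON and compares the enabled modules against a required list, as the gather/parse/extract tasks above suggest. A sketch of that comparison; `ceph mgr module ls` as the underlying command and the required module name below are both assumptions for illustration:

```bash
#!/usr/bin/env bash
# Sketch of the mgr module test: list modules as JSON, extract the enabled
# ones, and fail if a required module is missing.
set -euo pipefail

mgr="ceph-mgr-$(hostname -s)"   # assumption: local mgr container name
required="prometheus"            # assumption: illustrative required module

modules=$(docker exec "$mgr" ceph mgr module ls -f json)

# Note: always-on modules (e.g. balancer in recent releases) are reported
# separately under always_on_modules and would need to be unioned in for a
# complete check; this sketch only looks at explicitly enabled modules.
enabled=$(echo "$modules" | jq -r '.enabled_modules[]')

for mod in $required; do
  if ! echo "$enabled" | grep -qx "$mod"; then
    echo "FAILED: required mgr module $mod is not enabled"
    exit 1
  fi
done
echo "PASSED: all required mgr modules are enabled"
```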
20:32:07.622204 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.622222 | orchestrator | 2025-05-13 20:32:07.622241 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-05-13 20:32:07.622259 | orchestrator | Tuesday 13 May 2025 20:31:53 +0000 (0:00:01.068) 0:00:30.786 *********** 2025-05-13 20:32:07.622278 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:32:07.622297 | orchestrator | 2025-05-13 20:32:07.622315 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-13 20:32:07.622343 | orchestrator | Tuesday 13 May 2025 20:31:54 +0000 (0:00:01.045) 0:00:31.832 *********** 2025-05-13 20:32:07.622362 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.622380 | orchestrator | 2025-05-13 20:32:07.622399 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-13 20:32:07.622418 | orchestrator | Tuesday 13 May 2025 20:31:55 +0000 (0:00:01.224) 0:00:33.056 *********** 2025-05-13 20:32:07.622436 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:32:07.622454 | orchestrator | 2025-05-13 20:32:07.622472 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:32:07.622491 | orchestrator | Tuesday 13 May 2025 20:31:56 +0000 (0:00:01.186) 0:00:34.243 *********** 2025-05-13 20:32:07.622509 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.622528 | orchestrator | 2025-05-13 20:32:07.622546 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:32:07.622564 | orchestrator | Tuesday 13 May 2025 20:31:59 +0000 (0:00:02.668) 0:00:36.912 *********** 2025-05-13 20:32:07.622582 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.622619 | orchestrator | 2025-05-13 20:32:07.622637 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-13 20:32:07.622655 | orchestrator | Tuesday 13 May 2025 20:32:00 +0000 (0:00:01.235) 0:00:38.147 *********** 2025-05-13 20:32:07.622673 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.622692 | orchestrator | 2025-05-13 20:32:07.622711 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.622788 | orchestrator | Tuesday 13 May 2025 20:32:01 +0000 (0:00:01.209) 0:00:39.357 *********** 2025-05-13 20:32:07.622802 | orchestrator | 2025-05-13 20:32:07.622813 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.622824 | orchestrator | Tuesday 13 May 2025 20:32:02 +0000 (0:00:00.431) 0:00:39.789 *********** 2025-05-13 20:32:07.622834 | orchestrator | 2025-05-13 20:32:07.622845 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:32:07.622856 | orchestrator | Tuesday 13 May 2025 20:32:02 +0000 (0:00:00.424) 0:00:40.214 *********** 2025-05-13 20:32:07.622866 | orchestrator | 2025-05-13 20:32:07.622877 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-13 20:32:07.622888 | orchestrator | Tuesday 13 May 2025 20:32:03 +0000 (0:00:00.941) 0:00:41.155 *********** 2025-05-13 20:32:07.622899 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-05-13 20:32:07.622910 | orchestrator | 2025-05-13 20:32:07.622920 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:32:07.622931 | orchestrator | Tuesday 13 May 2025 20:32:05 +0000 (0:00:02.247) 0:00:43.403 *********** 2025-05-13 20:32:07.622942 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-13 20:32:07.622953 | orchestrator |  "msg": [ 2025-05-13 20:32:07.622965 | orchestrator |  "Validator run completed.", 2025-05-13 20:32:07.622977 | orchestrator |  "You can find the report file here:", 2025-05-13 20:32:07.622988 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-13T20:31:24+00:00-report.json", 2025-05-13 20:32:07.623000 | orchestrator |  "on the following host:", 2025-05-13 20:32:07.623011 | orchestrator |  "testbed-manager" 2025-05-13 20:32:07.623022 | orchestrator |  ] 2025-05-13 20:32:07.623034 | orchestrator | } 2025-05-13 20:32:07.623045 | orchestrator | 2025-05-13 20:32:07.623056 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:32:07.623139 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-13 20:32:07.623155 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 20:32:07.623186 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 20:32:07.987827 | orchestrator | 2025-05-13 20:32:07.987925 | orchestrator | 2025-05-13 20:32:07.987936 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:32:07.987947 | orchestrator | Tuesday 13 May 2025 20:32:07 +0000 (0:00:01.940) 0:00:45.343 *********** 2025-05-13 20:32:07.987956 | orchestrator | =============================================================================== 2025-05-13 20:32:07.987968 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.77s 2025-05-13 20:32:07.987982 | orchestrator | Aggregate test results step one ----------------------------------------- 2.67s 2025-05-13 20:32:07.987996 | orchestrator | Write report file ------------------------------------------------------- 2.25s 2025-05-13 20:32:07.988010 | orchestrator | Get timestamp for report file ------------------------------------------- 2.24s 2025-05-13 20:32:07.988025 | orchestrator | Print report file information ------------------------------------------- 1.94s 2025-05-13 20:32:07.988039 | orchestrator | Get container info ------------------------------------------------------ 1.91s 2025-05-13 20:32:07.988076 | orchestrator | Flush handlers ---------------------------------------------------------- 1.80s 2025-05-13 20:32:07.988085 | orchestrator | Prepare test data for container existance test -------------------------- 1.63s 2025-05-13 20:32:07.988094 | orchestrator | Create report output directory ------------------------------------------ 1.60s 2025-05-13 20:32:07.988103 | orchestrator | Flush handlers ---------------------------------------------------------- 1.50s 2025-05-13 20:32:07.988112 | orchestrator | Set test result to passed if container is existing ---------------------- 1.43s 2025-05-13 20:32:07.988120 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s 2025-05-13 20:32:07.988142 | orchestrator | Set test result to 
passed if ceph-mgr is running ------------------------ 1.25s 2025-05-13 20:32:07.988151 | orchestrator | Parse mgr module list from json ----------------------------------------- 1.25s 2025-05-13 20:32:07.988160 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.25s 2025-05-13 20:32:07.988169 | orchestrator | Set test result to failed if container is missing ----------------------- 1.24s 2025-05-13 20:32:07.988177 | orchestrator | Aggregate test results step two ----------------------------------------- 1.24s 2025-05-13 20:32:07.988186 | orchestrator | Prepare test data ------------------------------------------------------- 1.24s 2025-05-13 20:32:07.988195 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.22s 2025-05-13 20:32:07.988204 | orchestrator | Aggregate test results step three --------------------------------------- 1.21s 2025-05-13 20:32:08.281260 | orchestrator | + osism validate ceph-osds 2025-05-13 20:32:30.754993 | orchestrator | 2025-05-13 20:32:30.755140 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-13 20:32:30.755171 | orchestrator | 2025-05-13 20:32:30.755191 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-05-13 20:32:30.755209 | orchestrator | Tuesday 13 May 2025 20:32:16 +0000 (0:00:01.581) 0:00:01.581 *********** 2025-05-13 20:32:30.755227 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:30.755244 | orchestrator | 2025-05-13 20:32:30.755261 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-13 20:32:30.755280 | orchestrator | Tuesday 13 May 2025 20:32:18 +0000 (0:00:02.503) 0:00:04.084 *********** 2025-05-13 20:32:30.755299 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:30.755318 | orchestrator | 2025-05-13 20:32:30.755334 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-13 20:32:30.755350 | orchestrator | Tuesday 13 May 2025 20:32:19 +0000 (0:00:01.325) 0:00:05.410 *********** 2025-05-13 20:32:30.755366 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:32:30.755381 | orchestrator | 2025-05-13 20:32:30.755397 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-13 20:32:30.755414 | orchestrator | Tuesday 13 May 2025 20:32:21 +0000 (0:00:01.755) 0:00:07.166 *********** 2025-05-13 20:32:30.755430 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:32:30.755448 | orchestrator | 2025-05-13 20:32:30.755464 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-13 20:32:30.755480 | orchestrator | Tuesday 13 May 2025 20:32:22 +0000 (0:00:01.015) 0:00:08.181 *********** 2025-05-13 20:32:30.755497 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:32:30.755516 | orchestrator | 2025-05-13 20:32:30.755534 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-13 20:32:30.755551 | orchestrator | Tuesday 13 May 2025 20:32:23 +0000 (0:00:01.034) 0:00:09.216 *********** 2025-05-13 20:32:30.755569 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:32:30.755586 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:32:30.755603 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:32:30.755621 
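The OSD play first derives the expected OSD devices per host, and in the loop that follows it filters each host's container list down to the ceph-osd-<id> containers and compares the count against that expectation. A manual sketch of the count check, with the expected per-host count (two OSDs per node in this testbed) supplied as an assumption:

```bash
#!/usr/bin/env bash
# Sketch of the per-host OSD container count test: every configured OSD
# device should be served by one running ceph-osd-<id> container.
set -euo pipefail

expected=${EXPECTED_OSDS:-2}   # assumption: two OSDs per node, as in this log

# Count running containers whose name matches the ceph-osd-<id> pattern.
actual=$(docker ps --format '{{.Names}}' | grep -c -E '^ceph-osd-[0-9]+$' || true)

if [ "$actual" -ne "$expected" ]; then
  echo "FAILED: $actual ceph-osd containers running, expected $expected"
  exit 1
fi
echo "PASSED: $actual ceph-osd containers running"
```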
| orchestrator | 2025-05-13 20:32:30.755639 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-13 20:32:30.755657 | orchestrator | Tuesday 13 May 2025 20:32:25 +0000 (0:00:01.581) 0:00:10.798 *********** 2025-05-13 20:32:30.755782 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:32:30.755808 | orchestrator | 2025-05-13 20:32:30.755825 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-13 20:32:30.755845 | orchestrator | Tuesday 13 May 2025 20:32:26 +0000 (0:00:01.035) 0:00:11.833 *********** 2025-05-13 20:32:30.755861 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:32:30.755877 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:32:30.755893 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:32:30.755909 | orchestrator | 2025-05-13 20:32:30.755925 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-13 20:32:30.755941 | orchestrator | Tuesday 13 May 2025 20:32:27 +0000 (0:00:01.231) 0:00:13.064 *********** 2025-05-13 20:32:30.755957 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:32:30.755973 | orchestrator | 2025-05-13 20:32:30.755989 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:32:30.756005 | orchestrator | Tuesday 13 May 2025 20:32:28 +0000 (0:00:01.487) 0:00:14.552 *********** 2025-05-13 20:32:30.756021 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:32:30.756037 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:32:30.756053 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:32:30.756069 | orchestrator | 2025-05-13 20:32:30.756085 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-05-13 20:32:30.756102 | orchestrator | Tuesday 13 May 2025 20:32:30 +0000 (0:00:01.505) 0:00:16.057 *********** 2025-05-13 20:32:30.756123 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aad6867f00cdff147829dba915dc2447a7f7ce8bb4ef120d942e6406fb5cb92b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:30.756143 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c05d9c935037c6d2026a6a4c670d88d5e820bbcf8b7923d4e61d63c784f9205', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:30.756165 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2e54c45f7fc60f36f4050465611de125ed486eb5ba840c656d2ac46f55c42355', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:30.756183 | orchestrator | skipping: [testbed-node-3] => (item={'id': '155c1a1952b2d70c5ff3e15687ac344d984849df26e78b7c0861054522f3fc40', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:30.756214 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4ba3d457cc159bf459f6569f13fc2caaaf51308cf76a2c129505fe7c30f31ab1', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:30.756263 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'868756c932ddcfed912ed83348d18d7543485e9e1b6826fb7983def804e5c978', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:30.756285 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b169e403fa879bd69cac5e807fda0ea5c9b4f4f690ff36fad09f95204777d7e2', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-13 20:32:30.756302 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bcaa3736545036a1b9837a7c95ef74abe259386929c5ea9a07062d7bebc1b9dd', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:30.756319 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85c451a99209a09c9f4ba67de8dd9ac87a68ff1a46f957bb65af0969bfa6a2d9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:30.756356 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af0f96e44b20cacddcc549f24f23fe5464c6d757a82765a264356fb04bc29136', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-13 20:32:30.756373 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1245282754e170cef5732e0b3ad4aa9e9be8b178424851ea8cbc7ad1549fdc18', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:30.756393 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65b13c4f8b85988404f3df995f51f967461cda8e17333fd5d4a07833050f658d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:30.756410 | orchestrator | ok: [testbed-node-3] => (item={'id': '9f6d57f501d35f9bafde21ef0d122105840c2974c5241b1cc32ea2c4589a11bb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:32:30.756427 | orchestrator | ok: [testbed-node-3] => (item={'id': '01c4f180eb7309808e9c09e1261ce6bf9b856b01e6e4189751bde20acdb3dd05', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:32:30.756444 | orchestrator | skipping: [testbed-node-3] => (item={'id': '455dbddbf2082f7b069c67c9d9d0e96099fa225025919dd4b49a9b2edb6e16b9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-05-13 20:32:30.756475 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4699fa0f9adf0aa91b9bc5cb1529369e8bbc6d89a75f30231e1f82138e35dc86', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:32:30.756493 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6576bef4d7e88117cd55fdd8108456d7a4c2bcb8fd683a7896523c826e373449', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:32:30.756510 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': '7bb60a0ca984e9dbb05dc7139712f05e9676b8563dc247245bcd16e56bd15498', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:32:30.756527 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9204d536b014882f31cf892523605bf25838ac262ee5417db443def4494748be', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:32:30.756550 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77b820f7174cd926f4c5c48a8a58e0bfcb66a2133032ca1bdce80064ec3fe143', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-13 20:32:30.756568 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8fe15afc05a3b78ee10203e0c587f77d93b2c25ee07c9aa67418f3b593236234', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:30.756599 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ed112631b3ee47a78a3e78342defe6de5c8920b7a84419548c00b7ae1fb9135c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:31.980270 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22d3d123606d3d033a35103c0f038fe49f1fe4711077a2950365af8c8bf742a8', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:31.980396 | orchestrator | skipping: [testbed-node-4] => (item={'id': '16ef44fd7ae9d9e5f63cba10ccb4d929ce8abc3536bec821a2e6d8794d712835', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:31.980414 | orchestrator | skipping: [testbed-node-4] => (item={'id': '515a8fb0c090db92025f7206f7c5a7554a6d2bac1d47d5fd00b29d466c758039', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:31.980425 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f2bed4e84a7e2acf9d9660de8e8b7f9d9f2823323985b9919a7cb619dc8cdaa', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:31.980433 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5bac49a8169ef938e3de82113a9b07752b10d3889eca64a4ab7e568bf8925c2', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-13 20:32:31.980441 | orchestrator | skipping: [testbed-node-4] => (item={'id': '757eb847fe7e5f5cedda5ba1c58fd422599612c3ab7fc79b58f325d9e5b2389f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:31.980448 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b742037895041e8b33ea92a8be28841473d2cb39a24b2b4957dc91000f524ff4', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:31.980456 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '2cfad04e4f7be810d8346ff7519c8788d1c7b2e8323d5f1d897092cdfceaeef3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-13 20:32:31.980463 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4985b89823e073ee4e08ea40c2951f743d677ce7d5737aec2fd6d9bf84f94378', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:31.980471 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2c0e5809b5a90fb322a6c65c2b9ae7e0f3add6955ec6aefe81d584e7b9b4e177', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:31.980479 | orchestrator | ok: [testbed-node-4] => (item={'id': '41ebc72c3738bc1989c6f30300dd9aad395b2b18ae81f85f60e9b476bf2f1fe7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:32:31.980488 | orchestrator | ok: [testbed-node-4] => (item={'id': '92014d1d9a866c046fe309653c9c8e20702763117a7b3b512d429a64bee9f04b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:32:31.980507 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f6a53609ae537d69c290d98d3e3896f04f79783f1ee3e676e3979870fe59d098', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-05-13 20:32:31.980515 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90d195fbc7634c2fdf97360b5d64403a89a3cd196331438f4456102c13739a61', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:32:31.980523 | orchestrator | skipping: [testbed-node-4] => (item={'id': '60cb47fb67f361e8d7e1748c8e12b72e37d328a5182e6bbf8ab6d0323ce4ccfe', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:32:31.980549 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93c3cbc4b7e5d904c5fb8da74524f02adb01f3ab1237c48354ad614dfb95abb1', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:32:31.980557 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7b96c207b9490e060edba2ec2889bdcbf0e12adae135fd8829d7fc35708bc03f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:32:31.980565 | orchestrator | skipping: [testbed-node-4] => (item={'id': '473931abd8ff74533b4d06a0eb66e84b2b0fd46c9ba49b536a855b04e152b9fc', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-13 20:32:31.980573 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de434eb7b1409e7369efa013c6af83b825dbf84e21707ff03dc9a10e54ff22d7', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:31.980580 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'0fdb5d63ca5746fa356e425077704106dfe487ef68e39119c08774792c736371', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-05-13 20:32:31.980588 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7452d7dcebfdc68078e892b18a7d41836690c0cd21c72d31ba8b37ebafb87b82', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:31.980595 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a52635fcb9b311c08e43dcf7986e5783c7824631789a2cdd2b8d5459f041c0e1', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-13 20:32:31.980603 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8e0b98124bb44b91cfe78456a31e607561e9e8800e8feb6509d015222cd90a86', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:31.980610 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4d146bd13fda983d8d582433089951f32e9e4e4d429da847387107161bad29a9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-05-13 20:32:31.980618 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'be64aa2d1e1af84bcdec0857e4f90a3bc22962c044c647b41707fe87b8750947', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-13 20:32:31.980625 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b068c6e6cd88f34171f522e6facc2b0520e156de4dcda87db90d83127dedfb3', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:31.980632 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b8dd9a764e620725e7133b68fb04aa769f64105b71113d8dc5bb0f2c789899fc', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-13 20:32:31.980639 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9ee6cf566a58df2b28dd36afd2f4994961d782ef597c9babfe7d974d11f6cef8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-13 20:32:31.980650 | orchestrator | skipping: [testbed-node-5] => (item={'id': '56876ae184be70d8ab4be003feb106edc97ffd3557263b36b9dd10a6a4468741', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:31.980663 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05f8b2cf27628fccdacf9e13a24d955523aae545bbc831cc783c5bf37ecec1c3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-13 20:32:31.980676 | orchestrator | ok: [testbed-node-5] => (item={'id': '33aa5f2a67bf10dc0abd70c74f8af9d4f9f839d04e587b01e64edc23b3fb59a6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:33:02.786588 | 
orchestrator | ok: [testbed-node-5] => (item={'id': 'cc916b3f3f1a2dc404dc718b0ff468dd440decc42f4c79a95e4105e8b834eeb6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-13 20:33:02.786739 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7fd679e1ce99e046c806725a746daf513e58054e66c193459c86491bf0b7b9c9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-05-13 20:33:02.786761 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f264bc23ae99eecf85a8bbb9d53ac779356510feefb2493beb6746b1f47fb033', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:33:02.786776 | orchestrator | skipping: [testbed-node-5] => (item={'id': '774b21dd15d68e2db646c1b061243ba7d071c1e8364b5dcfa5f1e083d371a7dc', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-13 20:33:02.786788 | orchestrator | skipping: [testbed-node-5] => (item={'id': '090c0b1699b2321eb29028e9d07ece4cf4bc6eb5bb39059b08634ad7771bb2e5', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:33:02.786799 | orchestrator | skipping: [testbed-node-5] => (item={'id': '579a70db396e1eb315b2eef6e5553088bb32ceea0e2062d3fa36fb6c23849295', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-13 20:33:02.786811 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5571a668cfcafbadedbd7601b3233646f68b5f72cc92d456001ed687a8421eb6', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-05-13 20:33:02.786823 | orchestrator | 2025-05-13 20:33:02.786835 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-05-13 20:33:02.786848 | orchestrator | Tuesday 13 May 2025 20:32:31 +0000 (0:00:01.474) 0:00:17.532 *********** 2025-05-13 20:33:02.786859 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.786871 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.786882 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.786893 | orchestrator | 2025-05-13 20:33:02.786904 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-05-13 20:33:02.786915 | orchestrator | Tuesday 13 May 2025 20:32:33 +0000 (0:00:01.235) 0:00:18.767 *********** 2025-05-13 20:33:02.786926 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.786939 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:02.786949 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:02.786962 | orchestrator | 2025-05-13 20:33:02.786980 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-05-13 20:33:02.786998 | orchestrator | Tuesday 13 May 2025 20:32:34 +0000 (0:00:01.424) 0:00:20.192 *********** 2025-05-13 20:33:02.787016 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.787045 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.787065 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.787081 | orchestrator | 2025-05-13 20:33:02.787098 | orchestrator | TASK 
[Prepare test data] ******************************************************* 2025-05-13 20:33:02.787139 | orchestrator | Tuesday 13 May 2025 20:32:36 +0000 (0:00:01.392) 0:00:21.584 *********** 2025-05-13 20:33:02.787156 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.787170 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.787186 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.787203 | orchestrator | 2025-05-13 20:33:02.787219 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-05-13 20:33:02.787238 | orchestrator | Tuesday 13 May 2025 20:32:37 +0000 (0:00:01.273) 0:00:22.857 *********** 2025-05-13 20:33:02.787255 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-05-13 20:33:02.787274 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-05-13 20:33:02.787292 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.787310 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-05-13 20:33:02.787328 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-05-13 20:33:02.787347 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:02.787406 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-05-13 20:33:02.787427 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-05-13 20:33:02.787438 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:02.787449 | orchestrator | 2025-05-13 20:33:02.787460 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-05-13 20:33:02.787471 | orchestrator | Tuesday 13 May 2025 20:32:38 +0000 (0:00:01.335) 0:00:24.193 *********** 2025-05-13 20:33:02.787481 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.787492 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.787503 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.787513 | orchestrator | 2025-05-13 20:33:02.787548 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-13 20:33:02.787595 | orchestrator | Tuesday 13 May 2025 20:32:40 +0000 (0:00:01.433) 0:00:25.627 *********** 2025-05-13 20:33:02.787615 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.787634 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:02.787652 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:02.787670 | orchestrator | 2025-05-13 20:33:02.787758 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-05-13 20:33:02.787780 | orchestrator | Tuesday 13 May 2025 20:32:41 +0000 (0:00:01.351) 0:00:26.979 *********** 2025-05-13 20:33:02.787799 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.787817 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:02.787835 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:02.787846 | orchestrator | 2025-05-13 20:33:02.787857 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-05-13 20:33:02.787869 | orchestrator | Tuesday 13 May 2025 20:32:42 +0000 (0:00:01.291) 0:00:28.271 *********** 2025-05-13 20:33:02.787880 | orchestrator | ok: 
[testbed-node-3] 2025-05-13 20:33:02.787891 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.787901 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.787912 | orchestrator | 2025-05-13 20:33:02.787923 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:33:02.787933 | orchestrator | Tuesday 13 May 2025 20:32:44 +0000 (0:00:01.425) 0:00:29.696 *********** 2025-05-13 20:33:02.787944 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.787955 | orchestrator | 2025-05-13 20:33:02.787965 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:33:02.787976 | orchestrator | Tuesday 13 May 2025 20:32:45 +0000 (0:00:01.409) 0:00:31.105 *********** 2025-05-13 20:33:02.787987 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.787997 | orchestrator | 2025-05-13 20:33:02.788008 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-13 20:33:02.788032 | orchestrator | Tuesday 13 May 2025 20:32:46 +0000 (0:00:01.125) 0:00:32.231 *********** 2025-05-13 20:33:02.788043 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.788053 | orchestrator | 2025-05-13 20:33:02.788064 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:02.788075 | orchestrator | Tuesday 13 May 2025 20:32:47 +0000 (0:00:01.136) 0:00:33.368 *********** 2025-05-13 20:33:02.788086 | orchestrator | 2025-05-13 20:33:02.788097 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:02.788107 | orchestrator | Tuesday 13 May 2025 20:32:48 +0000 (0:00:00.400) 0:00:33.768 *********** 2025-05-13 20:33:02.788118 | orchestrator | 2025-05-13 20:33:02.788128 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:02.788139 | orchestrator | Tuesday 13 May 2025 20:32:48 +0000 (0:00:00.436) 0:00:34.204 *********** 2025-05-13 20:33:02.788150 | orchestrator | 2025-05-13 20:33:02.788160 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:33:02.788171 | orchestrator | Tuesday 13 May 2025 20:32:49 +0000 (0:00:00.719) 0:00:34.924 *********** 2025-05-13 20:33:02.788181 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.788192 | orchestrator | 2025-05-13 20:33:02.788203 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-05-13 20:33:02.788214 | orchestrator | Tuesday 13 May 2025 20:32:50 +0000 (0:00:01.167) 0:00:36.091 *********** 2025-05-13 20:33:02.788224 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.788243 | orchestrator | 2025-05-13 20:33:02.788268 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:33:02.788294 | orchestrator | Tuesday 13 May 2025 20:32:51 +0000 (0:00:01.162) 0:00:37.254 *********** 2025-05-13 20:33:02.788311 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788329 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.788346 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.788362 | orchestrator | 2025-05-13 20:33:02.788379 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-05-13 20:33:02.788398 | orchestrator | Tuesday 13 May 2025 20:32:52 +0000 (0:00:01.269) 0:00:38.523 
*********** 2025-05-13 20:33:02.788416 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788435 | orchestrator | 2025-05-13 20:33:02.788454 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-05-13 20:33:02.788472 | orchestrator | Tuesday 13 May 2025 20:32:54 +0000 (0:00:01.447) 0:00:39.971 *********** 2025-05-13 20:33:02.788490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-13 20:33:02.788510 | orchestrator | 2025-05-13 20:33:02.788528 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-05-13 20:33:02.788547 | orchestrator | Tuesday 13 May 2025 20:32:57 +0000 (0:00:02.836) 0:00:42.807 *********** 2025-05-13 20:33:02.788565 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788584 | orchestrator | 2025-05-13 20:33:02.788604 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-05-13 20:33:02.788622 | orchestrator | Tuesday 13 May 2025 20:32:58 +0000 (0:00:01.033) 0:00:43.841 *********** 2025-05-13 20:33:02.788641 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788660 | orchestrator | 2025-05-13 20:33:02.788755 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-05-13 20:33:02.788780 | orchestrator | Tuesday 13 May 2025 20:32:59 +0000 (0:00:01.129) 0:00:44.971 *********** 2025-05-13 20:33:02.788799 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:02.788818 | orchestrator | 2025-05-13 20:33:02.788837 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-05-13 20:33:02.788855 | orchestrator | Tuesday 13 May 2025 20:33:00 +0000 (0:00:01.047) 0:00:46.018 *********** 2025-05-13 20:33:02.788874 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788893 | orchestrator | 2025-05-13 20:33:02.788911 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:33:02.788930 | orchestrator | Tuesday 13 May 2025 20:33:01 +0000 (0:00:01.068) 0:00:47.087 *********** 2025-05-13 20:33:02.788963 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:02.788982 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:02.789001 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:02.789020 | orchestrator | 2025-05-13 20:33:02.789037 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-05-13 20:33:02.789070 | orchestrator | Tuesday 13 May 2025 20:33:02 +0000 (0:00:01.254) 0:00:48.342 *********** 2025-05-13 20:33:34.477073 | orchestrator | changed: [testbed-node-4] 2025-05-13 20:33:34.477185 | orchestrator | changed: [testbed-node-3] 2025-05-13 20:33:34.477200 | orchestrator | changed: [testbed-node-5] 2025-05-13 20:33:34.477212 | orchestrator | 2025-05-13 20:33:34.477225 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-05-13 20:33:34.477238 | orchestrator | Tuesday 13 May 2025 20:33:06 +0000 (0:00:03.479) 0:00:51.821 *********** 2025-05-13 20:33:34.477249 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.477260 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477271 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477282 | orchestrator | 2025-05-13 20:33:34.477293 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-05-13 20:33:34.477304 | orchestrator | Tuesday 
13 May 2025 20:33:07 +0000 (0:00:01.243) 0:00:53.065 *********** 2025-05-13 20:33:34.477315 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.477326 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477337 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477349 | orchestrator | 2025-05-13 20:33:34.477367 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-05-13 20:33:34.477384 | orchestrator | Tuesday 13 May 2025 20:33:08 +0000 (0:00:01.318) 0:00:54.383 *********** 2025-05-13 20:33:34.477395 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:34.477406 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:34.477417 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:34.477427 | orchestrator | 2025-05-13 20:33:34.477438 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-05-13 20:33:34.477449 | orchestrator | Tuesday 13 May 2025 20:33:10 +0000 (0:00:01.258) 0:00:55.642 *********** 2025-05-13 20:33:34.477460 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.477471 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477481 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477492 | orchestrator | 2025-05-13 20:33:34.477503 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-05-13 20:33:34.477513 | orchestrator | Tuesday 13 May 2025 20:33:11 +0000 (0:00:01.480) 0:00:57.123 *********** 2025-05-13 20:33:34.477524 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:34.477535 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:34.477545 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:34.477556 | orchestrator | 2025-05-13 20:33:34.477567 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-05-13 20:33:34.477578 | orchestrator | Tuesday 13 May 2025 20:33:12 +0000 (0:00:01.299) 0:00:58.422 *********** 2025-05-13 20:33:34.477589 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:34.477600 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:34.477613 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:34.477625 | orchestrator | 2025-05-13 20:33:34.477637 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-13 20:33:34.477650 | orchestrator | Tuesday 13 May 2025 20:33:14 +0000 (0:00:01.230) 0:00:59.653 *********** 2025-05-13 20:33:34.477710 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.477721 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477732 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477743 | orchestrator | 2025-05-13 20:33:34.477754 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-05-13 20:33:34.477765 | orchestrator | Tuesday 13 May 2025 20:33:15 +0000 (0:00:01.673) 0:01:01.326 *********** 2025-05-13 20:33:34.477776 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.477811 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477823 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477833 | orchestrator | 2025-05-13 20:33:34.477844 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-05-13 20:33:34.477855 | orchestrator | Tuesday 13 May 2025 20:33:17 +0000 (0:00:01.649) 0:01:02.976 *********** 2025-05-13 20:33:34.477866 | orchestrator | ok: [testbed-node-3] 
2025-05-13 20:33:34.477877 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.477887 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.477898 | orchestrator | 2025-05-13 20:33:34.477909 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-05-13 20:33:34.477919 | orchestrator | Tuesday 13 May 2025 20:33:18 +0000 (0:00:01.288) 0:01:04.264 *********** 2025-05-13 20:33:34.477930 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:34.477941 | orchestrator | skipping: [testbed-node-4] 2025-05-13 20:33:34.477952 | orchestrator | skipping: [testbed-node-5] 2025-05-13 20:33:34.477963 | orchestrator | 2025-05-13 20:33:34.477974 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-05-13 20:33:34.477984 | orchestrator | Tuesday 13 May 2025 20:33:19 +0000 (0:00:01.256) 0:01:05.521 *********** 2025-05-13 20:33:34.477995 | orchestrator | ok: [testbed-node-3] 2025-05-13 20:33:34.478006 | orchestrator | ok: [testbed-node-4] 2025-05-13 20:33:34.478073 | orchestrator | ok: [testbed-node-5] 2025-05-13 20:33:34.478086 | orchestrator | 2025-05-13 20:33:34.478097 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-13 20:33:34.478108 | orchestrator | Tuesday 13 May 2025 20:33:21 +0000 (0:00:01.343) 0:01:06.864 *********** 2025-05-13 20:33:34.478118 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:33:34.478129 | orchestrator | 2025-05-13 20:33:34.478154 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-13 20:33:34.478165 | orchestrator | Tuesday 13 May 2025 20:33:22 +0000 (0:00:01.177) 0:01:08.042 *********** 2025-05-13 20:33:34.478176 | orchestrator | skipping: [testbed-node-3] 2025-05-13 20:33:34.478208 | orchestrator | 2025-05-13 20:33:34.478220 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-13 20:33:34.478230 | orchestrator | Tuesday 13 May 2025 20:33:23 +0000 (0:00:01.152) 0:01:09.194 *********** 2025-05-13 20:33:34.478241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:33:34.478267 | orchestrator | 2025-05-13 20:33:34.478278 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-13 20:33:34.478289 | orchestrator | Tuesday 13 May 2025 20:33:25 +0000 (0:00:02.348) 0:01:11.543 *********** 2025-05-13 20:33:34.478300 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:33:34.478311 | orchestrator | 2025-05-13 20:33:34.478321 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-13 20:33:34.478332 | orchestrator | Tuesday 13 May 2025 20:33:27 +0000 (0:00:01.149) 0:01:12.692 *********** 2025-05-13 20:33:34.478360 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:33:34.478372 | orchestrator | 2025-05-13 20:33:34.478383 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:34.478393 | orchestrator | Tuesday 13 May 2025 20:33:28 +0000 (0:00:01.154) 0:01:13.846 *********** 2025-05-13 20:33:34.478404 | orchestrator | 2025-05-13 20:33:34.478428 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:34.478439 | orchestrator | Tuesday 13 May 2025 20:33:28 +0000 
(0:00:00.462) 0:01:14.309 *********** 2025-05-13 20:33:34.478449 | orchestrator | 2025-05-13 20:33:34.478460 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-13 20:33:34.478471 | orchestrator | Tuesday 13 May 2025 20:33:29 +0000 (0:00:00.382) 0:01:14.691 *********** 2025-05-13 20:33:34.478481 | orchestrator | 2025-05-13 20:33:34.478492 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-13 20:33:34.478502 | orchestrator | Tuesday 13 May 2025 20:33:29 +0000 (0:00:00.765) 0:01:15.457 *********** 2025-05-13 20:33:34.478523 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-13 20:33:34.478534 | orchestrator | 2025-05-13 20:33:34.478544 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-13 20:33:34.478555 | orchestrator | Tuesday 13 May 2025 20:33:32 +0000 (0:00:02.462) 0:01:17.920 *********** 2025-05-13 20:33:34.478565 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-05-13 20:33:34.478576 | orchestrator |  "msg": [ 2025-05-13 20:33:34.478602 | orchestrator |  "Validator run completed.", 2025-05-13 20:33:34.478615 | orchestrator |  "You can find the report file here:", 2025-05-13 20:33:34.478626 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-13T20:32:16+00:00-report.json", 2025-05-13 20:33:34.478638 | orchestrator |  "on the following host:", 2025-05-13 20:33:34.478649 | orchestrator |  "testbed-manager" 2025-05-13 20:33:34.478678 | orchestrator |  ] 2025-05-13 20:33:34.478690 | orchestrator | } 2025-05-13 20:33:34.478701 | orchestrator | 2025-05-13 20:33:34.478712 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:33:34.478723 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-05-13 20:33:34.478735 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-13 20:33:34.478746 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-13 20:33:34.478756 | orchestrator | 2025-05-13 20:33:34.478767 | orchestrator | 2025-05-13 20:33:34.478794 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:33:34.478806 | orchestrator | Tuesday 13 May 2025 20:33:34 +0000 (0:00:01.703) 0:01:19.623 *********** 2025-05-13 20:33:34.478817 | orchestrator | =============================================================================== 2025-05-13 20:33:34.478827 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 3.48s 2025-05-13 20:33:34.478838 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.84s 2025-05-13 20:33:34.478849 | orchestrator | Get timestamp for report file ------------------------------------------- 2.50s 2025-05-13 20:33:34.478859 | orchestrator | Write report file ------------------------------------------------------- 2.46s 2025-05-13 20:33:34.478870 | orchestrator | Aggregate test results step one ----------------------------------------- 2.35s 2025-05-13 20:33:34.478881 | orchestrator | Create report output directory ------------------------------------------ 1.76s 2025-05-13 20:33:34.478891 | orchestrator | Print report file information ------------------------------------------- 
1.70s 2025-05-13 20:33:34.478902 | orchestrator | Prepare test data ------------------------------------------------------- 1.67s 2025-05-13 20:33:34.478913 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.65s 2025-05-13 20:33:34.478923 | orchestrator | Flush handlers ---------------------------------------------------------- 1.61s 2025-05-13 20:33:34.478934 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.58s 2025-05-13 20:33:34.478944 | orchestrator | Flush handlers ---------------------------------------------------------- 1.56s 2025-05-13 20:33:34.478955 | orchestrator | Prepare test data ------------------------------------------------------- 1.50s 2025-05-13 20:33:34.478966 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 1.49s 2025-05-13 20:33:34.478977 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 1.48s 2025-05-13 20:33:34.478988 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 1.48s 2025-05-13 20:33:34.478999 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 1.45s 2025-05-13 20:33:34.479010 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 1.43s 2025-05-13 20:33:34.479028 | orchestrator | Set test result to passed if all containers are running ----------------- 1.43s 2025-05-13 20:33:34.479039 | orchestrator | Set test result to failed when count of containers is wrong ------------- 1.42s 2025-05-13 20:33:34.748934 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-05-13 20:33:34.757325 | orchestrator | + set -e 2025-05-13 20:33:34.757421 | orchestrator | + source /opt/manager-vars.sh 2025-05-13 20:33:34.757566 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-13 20:33:34.757729 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-13 20:33:34.757737 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-13 20:33:34.757744 | orchestrator | ++ CEPH_VERSION=reef 2025-05-13 20:33:34.757752 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-13 20:33:34.757847 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-13 20:33:34.757855 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-13 20:33:34.757862 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-13 20:33:34.757870 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-13 20:33:34.757880 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-13 20:33:34.757893 | orchestrator | ++ export ARA=false 2025-05-13 20:33:34.757907 | orchestrator | ++ ARA=false 2025-05-13 20:33:34.757919 | orchestrator | ++ export TEMPEST=false 2025-05-13 20:33:34.757932 | orchestrator | ++ TEMPEST=false 2025-05-13 20:33:34.757946 | orchestrator | ++ export IS_ZUUL=true 2025-05-13 20:33:34.757959 | orchestrator | ++ IS_ZUUL=true 2025-05-13 20:33:34.757972 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 20:33:34.757983 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173 2025-05-13 20:33:34.757991 | orchestrator | ++ export EXTERNAL_API=false 2025-05-13 20:33:34.757998 | orchestrator | ++ EXTERNAL_API=false 2025-05-13 20:33:34.758058 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-13 20:33:34.758067 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-13 20:33:34.758074 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-13 20:33:34.758082 | orchestrator | ++ 
IMAGE_NODE_USER=ubuntu 2025-05-13 20:33:34.758089 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-13 20:33:34.758096 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-13 20:33:34.758103 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-13 20:33:34.758110 | orchestrator | + source /etc/os-release 2025-05-13 20:33:34.758118 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-05-13 20:33:34.758125 | orchestrator | ++ NAME=Ubuntu 2025-05-13 20:33:34.758132 | orchestrator | ++ VERSION_ID=24.04 2025-05-13 20:33:34.758140 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-05-13 20:33:34.758147 | orchestrator | ++ VERSION_CODENAME=noble 2025-05-13 20:33:34.758155 | orchestrator | ++ ID=ubuntu 2025-05-13 20:33:34.758162 | orchestrator | ++ ID_LIKE=debian 2025-05-13 20:33:34.758169 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-05-13 20:33:34.758177 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-05-13 20:33:34.758184 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-05-13 20:33:34.758191 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-05-13 20:33:34.758200 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-05-13 20:33:34.758207 | orchestrator | ++ LOGO=ubuntu-logo 2025-05-13 20:33:34.758214 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-05-13 20:33:34.758223 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-05-13 20:33:34.758231 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-13 20:33:34.772189 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-05-13 20:33:56.546011 | orchestrator | 2025-05-13 20:33:56.546183 | orchestrator | # Status of Elasticsearch 2025-05-13 20:33:56.546198 | orchestrator | 2025-05-13 20:33:56.546211 | orchestrator | + pushd /opt/configuration/contrib 2025-05-13 20:33:56.546224 | orchestrator | + echo 2025-05-13 20:33:56.546235 | orchestrator | + echo '# Status of Elasticsearch' 2025-05-13 20:33:56.546246 | orchestrator | + echo 2025-05-13 20:33:56.546257 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-05-13 20:33:56.734492 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-05-13 20:33:56.734862 | orchestrator | 2025-05-13 20:33:56.734891 | orchestrator | # Status of MariaDB 2025-05-13 20:33:56.734904 | orchestrator | 2025-05-13 20:33:56.734916 | orchestrator | + echo 2025-05-13 20:33:56.734927 | orchestrator | + echo '# Status of MariaDB' 2025-05-13 20:33:56.734939 | orchestrator | + echo 2025-05-13 20:33:56.734949 | orchestrator | + MARIADB_USER=root_shard_0 2025-05-13 20:33:56.734961 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-05-13 20:33:56.813197 | orchestrator | Reading package lists... 2025-05-13 20:33:57.130255 | orchestrator | Building dependency tree... 2025-05-13 20:33:57.130609 | orchestrator | Reading state information... 
2025-05-13 20:33:57.520058 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-05-13 20:33:57.520169 | orchestrator | bc set to manually installed. 2025-05-13 20:33:57.520206 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. 2025-05-13 20:33:58.208438 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-05-13 20:33:58.208540 | orchestrator | 2025-05-13 20:33:58.208550 | orchestrator | # Status of Prometheus 2025-05-13 20:33:58.208557 | orchestrator | 2025-05-13 20:33:58.208564 | orchestrator | + echo 2025-05-13 20:33:58.208570 | orchestrator | + echo '# Status of Prometheus' 2025-05-13 20:33:58.208577 | orchestrator | + echo 2025-05-13 20:33:58.208584 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-05-13 20:33:58.270744 | orchestrator | Unauthorized 2025-05-13 20:33:58.274008 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-05-13 20:33:58.340310 | orchestrator | Unauthorized 2025-05-13 20:33:58.343391 | orchestrator | 2025-05-13 20:33:58.343443 | orchestrator | # Status of RabbitMQ 2025-05-13 20:33:58.343458 | orchestrator | 2025-05-13 20:33:58.343469 | orchestrator | + echo 2025-05-13 20:33:58.343481 | orchestrator | + echo '# Status of RabbitMQ' 2025-05-13 20:33:58.343492 | orchestrator | + echo 2025-05-13 20:33:58.343504 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-05-13 20:33:58.858229 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-05-13 20:33:58.868873 | orchestrator | 2025-05-13 20:33:58.868992 | orchestrator | # Status of Redis 2025-05-13 20:33:58.869015 | orchestrator | 2025-05-13 20:33:58.869032 | orchestrator | + echo 2025-05-13 20:33:58.869050 | orchestrator | + echo '# Status of Redis' 2025-05-13 20:33:58.869067 | orchestrator | + echo 2025-05-13 20:33:58.869086 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-05-13 20:33:58.875045 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002071s;;;0.000000;10.000000 2025-05-13 20:33:58.876135 | orchestrator | + popd 2025-05-13 20:33:58.876189 | orchestrator | + echo 2025-05-13 20:33:58.876205 | orchestrator | 2025-05-13 20:33:58.876220 | orchestrator | # Create backup of MariaDB database 2025-05-13 20:33:58.876235 | orchestrator | 2025-05-13 20:33:58.876249 | orchestrator | + echo '# Create backup of MariaDB database' 2025-05-13 20:33:58.876282 | orchestrator | + echo 2025-05-13 20:33:58.876296 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-05-13 20:34:00.751692 | orchestrator | 2025-05-13 20:34:00 | INFO  | Task 8966ce75-5ab7-4553-8281-c2fe143c5c39 (mariadb_backup) was prepared for execution. 2025-05-13 20:34:00.752546 | orchestrator | 2025-05-13 20:34:00 | INFO  | It takes a moment until task 8966ce75-5ab7-4553-8281-c2fe143c5c39 (mariadb_backup) has been started and output is visible here. 
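[annotation] The full backup above is launched with `osism apply mariadb_backup -e mariadb_backup_type=full`; further down, the same playbook is re-run with `mariadb_backup_type=incremental`. A minimal sketch of driving both backup types in sequence (both commands are taken verbatim from this log; the ordering constraint that an incremental backup needs a prior full backup, and the fail-fast error handling, are assumptions, not part of the testbed scripts):

    # Take a full base backup first, then an incremental on top of it.
    # Commands appear verbatim in this log; 'set -e' ordering is an assumption.
    set -e
    osism apply mariadb_backup -e mariadb_backup_type=full
    osism apply mariadb_backup -e mariadb_backup_type=incremental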
2025-05-13 20:34:04.443605 | orchestrator | 2025-05-13 20:34:04.445907 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:34:04.445961 | orchestrator | 2025-05-13 20:34:04.446936 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:34:04.450366 | orchestrator | Tuesday 13 May 2025 20:34:04 +0000 (0:00:00.185) 0:00:00.185 *********** 2025-05-13 20:34:04.628052 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:34:04.769116 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:34:04.769221 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:34:04.770146 | orchestrator | 2025-05-13 20:34:04.774842 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:34:04.775354 | orchestrator | Tuesday 13 May 2025 20:34:04 +0000 (0:00:00.330) 0:00:00.515 *********** 2025-05-13 20:34:05.339009 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-13 20:34:05.340818 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-13 20:34:05.342971 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-13 20:34:05.344510 | orchestrator | 2025-05-13 20:34:05.345456 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-13 20:34:05.346174 | orchestrator | 2025-05-13 20:34:05.346673 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-13 20:34:05.347379 | orchestrator | Tuesday 13 May 2025 20:34:05 +0000 (0:00:00.571) 0:00:01.086 *********** 2025-05-13 20:34:05.758077 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:34:05.763835 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 20:34:05.766534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 20:34:05.767740 | orchestrator | 2025-05-13 20:34:05.768708 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 20:34:05.769492 | orchestrator | Tuesday 13 May 2025 20:34:05 +0000 (0:00:00.417) 0:00:01.503 *********** 2025-05-13 20:34:06.299256 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:34:06.300342 | orchestrator | 2025-05-13 20:34:06.301373 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-13 20:34:06.303005 | orchestrator | Tuesday 13 May 2025 20:34:06 +0000 (0:00:00.540) 0:00:02.044 *********** 2025-05-13 20:34:09.403323 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:34:09.403437 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:34:09.405388 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:34:09.406539 | orchestrator | 2025-05-13 20:34:09.408737 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-05-13 20:34:09.409184 | orchestrator | Tuesday 13 May 2025 20:34:09 +0000 (0:00:03.101) 0:00:05.146 *********** 2025-05-13 20:34:35.850967 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-13 20:34:35.851220 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-13 20:34:35.851259 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-13 20:34:35.853164 | orchestrator | 
mariadb_bootstrap_restart 2025-05-13 20:34:35.928799 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:35.928948 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:35.930147 | orchestrator | changed: [testbed-node-0] 2025-05-13 20:34:35.933458 | orchestrator | 2025-05-13 20:34:35.933492 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-13 20:34:35.933728 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:35.934979 | orchestrator | 2025-05-13 20:34:35.935559 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-13 20:34:35.935985 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:35.936794 | orchestrator | 2025-05-13 20:34:35.937877 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-13 20:34:35.939205 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:35.940071 | orchestrator | 2025-05-13 20:34:35.940211 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-13 20:34:35.940938 | orchestrator | 2025-05-13 20:34:35.941581 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-13 20:34:35.942162 | orchestrator | Tuesday 13 May 2025 20:34:35 +0000 (0:00:26.529) 0:00:31.675 *********** 2025-05-13 20:34:36.144303 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:34:36.285152 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:36.285748 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:36.287582 | orchestrator | 2025-05-13 20:34:36.288500 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-13 20:34:36.288811 | orchestrator | Tuesday 13 May 2025 20:34:36 +0000 (0:00:00.356) 0:00:32.031 *********** 2025-05-13 20:34:36.688430 | orchestrator | skipping: [testbed-node-0] 2025-05-13 20:34:36.726716 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:36.726875 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:36.727394 | orchestrator | 2025-05-13 20:34:36.729749 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:34:36.729804 | orchestrator | 2025-05-13 20:34:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 20:34:36.729818 | orchestrator | 2025-05-13 20:34:36 | INFO  | Please wait and do not abort execution. 
2025-05-13 20:34:36.730110 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-13 20:34:36.731379 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:34:36.732286 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:34:36.732798 | orchestrator | 2025-05-13 20:34:36.733434 | orchestrator | 2025-05-13 20:34:36.733811 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:34:36.734295 | orchestrator | Tuesday 13 May 2025 20:34:36 +0000 (0:00:00.442) 0:00:32.474 *********** 2025-05-13 20:34:36.734797 | orchestrator | =============================================================================== 2025-05-13 20:34:36.735146 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 26.53s 2025-05-13 20:34:36.735698 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.10s 2025-05-13 20:34:36.736098 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-05-13 20:34:36.736373 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2025-05-13 20:34:36.736956 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s 2025-05-13 20:34:36.737305 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2025-05-13 20:34:36.738109 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.36s 2025-05-13 20:34:36.738829 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-05-13 20:34:37.330384 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-05-13 20:34:39.104327 | orchestrator | 2025-05-13 20:34:39 | INFO  | Task e5962ac9-6dce-43a2-997b-76ee79b73d46 (mariadb_backup) was prepared for execution. 2025-05-13 20:34:39.104451 | orchestrator | 2025-05-13 20:34:39 | INFO  | It takes a moment until task e5962ac9-6dce-43a2-997b-76ee79b73d46 (mariadb_backup) has been started and output is visible here. 
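[annotation] In the recap above only testbed-node-0 reports a change: the kolla mariadb role runs Mariabackup in a one-shot container on a single host of the shard, while the other shard members skip the task. A hedged way to spot-check the resulting artifact on that host (the `mariadb_backup` volume name is an assumption based on kolla conventions and the `/backup` mount shown in the failure output below; not a step this job performs):

    # Hypothetical verification on testbed-node-0; volume name is an assumption.
    docker run --rm -v mariadb_backup:/backup alpine ls -lh /backup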
2025-05-13 20:34:42.949819 | orchestrator | 2025-05-13 20:34:42.952768 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:34:42.953763 | orchestrator | 2025-05-13 20:34:42.954054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:34:42.957305 | orchestrator | Tuesday 13 May 2025 20:34:42 +0000 (0:00:00.190) 0:00:00.190 *********** 2025-05-13 20:34:43.135571 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:34:43.246919 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:34:43.247071 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:34:43.247088 | orchestrator | 2025-05-13 20:34:43.247102 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:34:43.247115 | orchestrator | Tuesday 13 May 2025 20:34:43 +0000 (0:00:00.299) 0:00:00.490 *********** 2025-05-13 20:34:43.800136 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-13 20:34:43.800450 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-13 20:34:43.804396 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-13 20:34:43.804791 | orchestrator | 2025-05-13 20:34:43.805353 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-13 20:34:43.805861 | orchestrator | 2025-05-13 20:34:43.806742 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-13 20:34:43.806774 | orchestrator | Tuesday 13 May 2025 20:34:43 +0000 (0:00:00.552) 0:00:01.043 *********** 2025-05-13 20:34:44.202822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:34:44.209957 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 20:34:44.229103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 20:34:44.251746 | orchestrator | 2025-05-13 20:34:44.252277 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 20:34:44.252798 | orchestrator | Tuesday 13 May 2025 20:34:44 +0000 (0:00:00.396) 0:00:01.439 *********** 2025-05-13 20:34:44.715818 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:34:44.719127 | orchestrator | 2025-05-13 20:34:44.719245 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-13 20:34:44.719285 | orchestrator | Tuesday 13 May 2025 20:34:44 +0000 (0:00:00.520) 0:00:01.960 *********** 2025-05-13 20:34:47.828550 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:34:47.829814 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:34:47.830853 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:34:47.835208 | orchestrator | 2025-05-13 20:34:47.836182 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-13 20:34:47.836843 | orchestrator | Tuesday 13 May 2025 20:34:47 +0000 (0:00:03.109) 0:00:05.069 *********** 2025-05-13 20:34:52.773928 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:52.774148 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:52.777227 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-13 20:34:51 Connecting to MariaDB server host: 192.168.16.12, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-13 20:34:51 Using server version 10.11.12-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-13 20:34:51 incremental backup from 0 is enabled.\n[00] 2025-05-13 20:34:51 uses posix_fadvise().\n[00] 2025-05-13 20:34:51 cd to /var/lib/mysql/\n[00] 2025-05-13 20:34:51 open files limit requested 0, set to 1048576\n[00] 2025-05-13 20:34:51 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-13 20:34:51 innodb_data_home_dir = \n[00] 2025-05-13 20:34:51 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-13 20:34:51 innodb_log_group_home_dir = ./\n[00] 2025-05-13 20:34:51 InnoDB: Using liburing\n2025-05-13 20:34:51 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-13 20:34:51 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-13 20:34:51 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250513 20:34:51 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x5e0c69fe339e]\nmariabackup(handle_fatal_signal+0x229)[0x5e0c69b06689]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x75bd64534050]\nmariabackup(server_mysql_fetch_row+0x14)[0x5e0c69752424]\nmariabackup(+0x76ca37)[0x5e0c69724a37]\nmariabackup(+0x75f32a)[0x5e0c6971732a]\nmariabackup(main+0x163)[0x5e0c696bc003]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x75bd6451f24a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x75bd6451f305]\nmariabackup(_start+0x21)[0x5e0c69701111]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-13 20:34:51 Connecting to MariaDB server host: 192.168.16.12, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-13 20:34:51 Using server version 10.11.12-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-13 20:34:51 incremental backup from 0 is enabled.", "[00] 2025-05-13 20:34:51 uses posix_fadvise().", "[00] 2025-05-13 20:34:51 cd to /var/lib/mysql/", "[00] 2025-05-13 20:34:51 open files limit requested 0, set to 1048576", "[00] 2025-05-13 20:34:51 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-13 20:34:51 innodb_data_home_dir = ", "[00] 2025-05-13 20:34:51 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-13 20:34:51 innodb_log_group_home_dir = ./", "[00] 2025-05-13 20:34:51 InnoDB: Using liburing", "2025-05-13 20:34:51 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: 
sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).", "2025-05-13 20:34:51 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-13 20:34:51 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250513 20:34:51 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x5e0c69fe339e]", "mariabackup(handle_fatal_signal+0x229)[0x5e0c69b06689]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x75bd64534050]", "mariabackup(server_mysql_fetch_row+0x14)[0x5e0c69752424]", "mariabackup(+0x76ca37)[0x5e0c69724a37]", "mariabackup(+0x75f32a)[0x5e0c6971732a]", "mariabackup(main+0x163)[0x5e0c696bc003]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x75bd6451f24a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x75bd6451f305]", "mariabackup(_start+0x21)[0x5e0c69701111]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-13 20:34:52.963326 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-13 20:34:52.964553 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-13 20:34:52.964666 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-13 20:34:52.967369 | orchestrator | mariadb_bootstrap_restart 2025-05-13 
20:34:53.044648 | orchestrator | 2025-05-13 20:34:53.045612 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-13 20:34:53.047891 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:53.048012 | orchestrator | 2025-05-13 20:34:53.049235 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-13 20:34:53.050661 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:53.051134 | orchestrator | 2025-05-13 20:34:53.053524 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-13 20:34:53.055241 | orchestrator | skipping: no hosts matched 2025-05-13 20:34:53.055389 | orchestrator | 2025-05-13 20:34:53.055415 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-13 20:34:53.056491 | orchestrator | 2025-05-13 20:34:53.057186 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-13 20:34:53.057670 | orchestrator | Tuesday 13 May 2025 20:34:53 +0000 (0:00:05.218) 0:00:10.287 *********** 2025-05-13 20:34:53.250508 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:53.250913 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:53.251396 | orchestrator | 2025-05-13 20:34:53.252147 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-13 20:34:53.252657 | orchestrator | Tuesday 13 May 2025 20:34:53 +0000 (0:00:00.205) 0:00:10.493 *********** 2025-05-13 20:34:53.381934 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:34:53.382676 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:34:53.383868 | orchestrator | 2025-05-13 20:34:53.384956 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-13 20:34:53.385155 | orchestrator | 2025-05-13 20:34:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-13 20:34:53.385201 | orchestrator | 2025-05-13 20:34:53 | INFO  | Please wait and do not abort execution. 
2025-05-13 20:34:53.386798 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-13 20:34:53.387716 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:34:53.388314 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-13 20:34:53.388896 | orchestrator | 2025-05-13 20:34:53.389503 | orchestrator | 2025-05-13 20:34:53.389922 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-13 20:34:53.390658 | orchestrator | Tuesday 13 May 2025 20:34:53 +0000 (0:00:00.130) 0:00:10.624 *********** 2025-05-13 20:34:53.391213 | orchestrator | =============================================================================== 2025-05-13 20:34:53.392354 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 5.22s 2025-05-13 20:34:53.393357 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.11s 2025-05-13 20:34:53.396536 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-05-13 20:34:53.398112 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-05-13 20:34:53.398170 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2025-05-13 20:34:53.400232 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-05-13 20:34:53.400924 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.21s 2025-05-13 20:34:53.402146 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.13s 2025-05-13 20:34:53.774245 | orchestrator | 2025-05-13 20:34:53 | INFO  | Task e6682ec0-7dc0-4739-8f37-47253d3f505c (mariadb_backup) was prepared for execution. 2025-05-13 20:34:53.774390 | orchestrator | 2025-05-13 20:34:53 | INFO  | It takes a moment until task e6682ec0-7dc0-4739-8f37-47253d3f505c (mariadb_backup) has been started and output is visible here. 
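[annotation] Both incremental runs fail identically: mariabackup first logs `io_uring_queue_init() failed with EPERM` because `kernel.io_uring_disabled` is set restrictively on the Ubuntu 24.04 kernel (6.11), falls back to `innodb_use_native_aio=OFF`, and then crashes with signal 11 in `server_mysql_fetch_row`. MariaDB's own output flags the segfault as a bug, so relaxing the sysctl is a plausible mitigation rather than a verified fix. A hedged sketch for inspecting and loosening the setting on an affected node (the sysctl name comes directly from the error message; the persistence file name is hypothetical):

    # Inspect the current policy (2 = io_uring disabled, 1 = restricted to
    # members of kernel.io_uring_group, 0 = enabled).
    sysctl kernel.io_uring_disabled
    # Relax it at runtime...
    sudo sysctl -w kernel.io_uring_disabled=0
    # ...and persist it across reboots (file name is hypothetical).
    echo 'kernel.io_uring_disabled = 0' | sudo tee /etc/sysctl.d/99-io-uring.conf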
2025-05-13 20:34:57.690492 | orchestrator | 2025-05-13 20:34:57.691962 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-13 20:34:57.694092 | orchestrator | 2025-05-13 20:34:57.696018 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-13 20:34:57.696076 | orchestrator | Tuesday 13 May 2025 20:34:57 +0000 (0:00:00.186) 0:00:00.186 *********** 2025-05-13 20:34:57.908656 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:34:58.038200 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:34:58.038415 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:34:58.038999 | orchestrator | 2025-05-13 20:34:58.040356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-13 20:34:58.040787 | orchestrator | Tuesday 13 May 2025 20:34:58 +0000 (0:00:00.351) 0:00:00.537 *********** 2025-05-13 20:34:58.620098 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-13 20:34:58.620396 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-13 20:34:58.622566 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-13 20:34:58.624183 | orchestrator | 2025-05-13 20:34:58.625648 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-13 20:34:58.627290 | orchestrator | 2025-05-13 20:34:58.628160 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-13 20:34:58.628827 | orchestrator | Tuesday 13 May 2025 20:34:58 +0000 (0:00:00.580) 0:00:01.118 *********** 2025-05-13 20:34:59.091992 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-13 20:34:59.092101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-13 20:34:59.092429 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-13 20:34:59.094923 | orchestrator | 2025-05-13 20:34:59.095746 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-13 20:34:59.096529 | orchestrator | Tuesday 13 May 2025 20:34:59 +0000 (0:00:00.469) 0:00:01.587 *********** 2025-05-13 20:34:59.649928 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-13 20:34:59.650129 | orchestrator | 2025-05-13 20:34:59.650270 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-13 20:34:59.651451 | orchestrator | Tuesday 13 May 2025 20:34:59 +0000 (0:00:00.561) 0:00:02.149 *********** 2025-05-13 20:35:02.890549 | orchestrator | ok: [testbed-node-0] 2025-05-13 20:35:02.890745 | orchestrator | ok: [testbed-node-1] 2025-05-13 20:35:02.891164 | orchestrator | ok: [testbed-node-2] 2025-05-13 20:35:02.891357 | orchestrator | 2025-05-13 20:35:02.892495 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-13 20:35:02.892554 | orchestrator | Tuesday 13 May 2025 20:35:02 +0000 (0:00:03.237) 0:00:05.386 *********** 2025-05-13 20:35:07.463286 | orchestrator | skipping: [testbed-node-1] 2025-05-13 20:35:07.463768 | orchestrator | skipping: [testbed-node-2] 2025-05-13 20:35:07.466694 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 139", "rc": 139, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-05-13 20:35:06 Connecting to MariaDB server host: 192.168.16.12, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-05-13 20:35:06 Using server version 10.11.12-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-05-13 20:35:06 incremental backup from 0 is enabled.\n[00] 2025-05-13 20:35:06 uses posix_fadvise().\n[00] 2025-05-13 20:35:06 cd to /var/lib/mysql/\n[00] 2025-05-13 20:35:06 open files limit requested 0, set to 1048576\n[00] 2025-05-13 20:35:06 mariabackup: using the following InnoDB configuration:\n[00] 2025-05-13 20:35:06 innodb_data_home_dir = \n[00] 2025-05-13 20:35:06 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-05-13 20:35:06 innodb_log_group_home_dir = ./\n[00] 2025-05-13 20:35:06 InnoDB: Using liburing\n2025-05-13 20:35:06 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-05-13 20:35:06 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-05-13 20:35:06 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n250513 20:35:06 [ERROR] mariabackup got signal 11 ;\nSorry, we probably made a mistake, and this is a bug.\n\nYour assistance in bug reporting will enable us to fix this for the next release.\nTo report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report\na bug on https://jira.mariadb.org/.\n\nPlease include the information from the server start above, to the end of the\ninformation below.\n\nServer version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d\n\nThe information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/\ncontains instructions to obtain a better version of the backtrace below.\nFollowing these instructions will help MariaDB developers provide a fix quicker.\n\nAttempting backtrace. 
Include this in the bug report.\n(note: Retrieving this information may fail)\n\nThread pointer: 0x0\nstack_bottom = 0x0 thread_stack 0x49000\nPrinting to addr2line failed\nmariabackup(my_print_stacktrace+0x2e)[0x5e70a023339e]\nmariabackup(handle_fatal_signal+0x229)[0x5e709fd56689]\n/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x797beb364050]\nmariabackup(server_mysql_fetch_row+0x14)[0x5e709f9a2424]\nmariabackup(+0x76ca37)[0x5e709f974a37]\nmariabackup(+0x75f32a)[0x5e709f96732a]\nmariabackup(main+0x163)[0x5e709f90c003]\n/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x797beb34f24a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x797beb34f305]\nmariabackup(_start+0x21)[0x5e709f951111]\nWriting a core file...\nWorking directory at /var/lib/mysql\nResource Limits (excludes unlimited resources):\nLimit Soft Limit Hard Limit Units \nMax stack size 8388608 unlimited bytes \nMax open files 1048576 1048576 files \nMax locked memory 8388608 8388608 bytes \nMax pending signals 128077 128077 signals \nMax msgqueue size 819200 819200 bytes \nMax nice priority 0 0 \nMax realtime priority 0 0 \nCore pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E\n\nKernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2\n\n/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"\n 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\"\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-05-13 20:35:06 Connecting to MariaDB server host: 192.168.16.12, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-05-13 20:35:06 Using server version 10.11.12-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.12-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-05-13 20:35:06 incremental backup from 0 is enabled.", "[00] 2025-05-13 20:35:06 uses posix_fadvise().", "[00] 2025-05-13 20:35:06 cd to /var/lib/mysql/", "[00] 2025-05-13 20:35:06 open files limit requested 0, set to 1048576", "[00] 2025-05-13 20:35:06 mariabackup: using the following InnoDB configuration:", "[00] 2025-05-13 20:35:06 innodb_data_home_dir = ", "[00] 2025-05-13 20:35:06 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-05-13 20:35:06 innodb_log_group_home_dir = ./", "[00] 2025-05-13 20:35:06 InnoDB: Using liburing", "2025-05-13 20:35:06 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed with EPERM: 
sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).", "2025-05-13 20:35:06 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-05-13 20:35:06 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "250513 20:35:06 [ERROR] mariabackup got signal 11 ;", "Sorry, we probably made a mistake, and this is a bug.", "", "Your assistance in bug reporting will enable us to fix this for the next release.", "To report this bug, see https://mariadb.com/kb/en/reporting-bugs about how to report", "a bug on https://jira.mariadb.org/.", "", "Please include the information from the server start above, to the end of the", "information below.", "", "Server version: 10.11.12-MariaDB-deb12 source revision: cafd22db7970ce081bafd887359aa0a77cfb769d", "", "The information page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/", "contains instructions to obtain a better version of the backtrace below.", "Following these instructions will help MariaDB developers provide a fix quicker.", "", "Attempting backtrace. Include this in the bug report.", "(note: Retrieving this information may fail)", "", "Thread pointer: 0x0", "stack_bottom = 0x0 thread_stack 0x49000", "Printing to addr2line failed", "mariabackup(my_print_stacktrace+0x2e)[0x5e70a023339e]", "mariabackup(handle_fatal_signal+0x229)[0x5e709fd56689]", "/lib/x86_64-linux-gnu/libc.so.6(+0x3c050)[0x797beb364050]", "mariabackup(server_mysql_fetch_row+0x14)[0x5e709f9a2424]", "mariabackup(+0x76ca37)[0x5e709f974a37]", "mariabackup(+0x75f32a)[0x5e709f96732a]", "mariabackup(main+0x163)[0x5e709f90c003]", "/lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x797beb34f24a]", "/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x797beb34f305]", "mariabackup(_start+0x21)[0x5e709f951111]", "Writing a core file...", "Working directory at /var/lib/mysql", "Resource Limits (excludes unlimited resources):", "Limit Soft Limit Hard Limit Units ", "Max stack size 8388608 unlimited bytes ", "Max open files 1048576 1048576 files ", "Max locked memory 8388608 8388608 bytes ", "Max pending signals 128077 128077 signals ", "Max msgqueue size 819200 819200 bytes ", "Max nice priority 0 0 ", "Max realtime priority 0 0 ", "Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E", "", "Kernel version: Linux version 6.11.0-25-generic (buildd@lcy02-amd64-027) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #25~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 15 17:20:50 UTC 2", "", "/usr/local/bin/kolla_mariadb_backup_replica.sh: line 36: 44 Segmentation fault (core dumped) mariabackup --defaults-file=\"${REPLICA_MY_CNF}\" --backup --stream=mbstream --incremental-history-name=\"${LAST_FULL_DATE}\" --history=\"${LAST_FULL_DATE}\"", " 45 Done | gzip > \"${BACKUP_DIR}/incremental-$(date +%H)-mysqlbackup-${LAST_FULL_DATE}.qp.mbc.mbs.gz\""], "stdout": "Taking an incremental backup\n", "stdout_lines": ["Taking an incremental backup"]} 2025-05-13 20:35:07.624286 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-13 20:35:07.625706 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-13 20:35:07.626975 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-13 20:35:07.628724 | orchestrator | mariadb_bootstrap_restart 2025-05-13 
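The failure above has two layers: io_uring_queue_init() returns EPERM because the node's kernel.io_uring_disabled sysctl blocks io_uring for this process, and mariabackup then dies with signal 11 in server_mysql_fetch_row instead of completing on the innodb_use_native_aio=OFF fallback, which the MariaDB output itself flags as a bug to report upstream. A minimal triage sketch, assuming shell access to testbed-node-0; the value semantics are taken from io_uring_setup(2) as quoted in the error message:

    # Inspect the io_uring policy on the affected node:
    # 0 = enabled, 1 = only for members of kernel.io_uring_group, 2 = disabled.
    sysctl kernel.io_uring_disabled kernel.io_uring_group

    # Possible workaround while the mariabackup crash is unfixed: allow
    # io_uring again so the crashing fallback path is never entered.
    sudo sysctl -w kernel.io_uring_disabled=0

Persisting the setting would mean dropping the same key into a file under /etc/sysctl.d/ on the node; whether that is acceptable is a policy decision, since the sysctl exists precisely to restrict io_uring.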
2025-05-13 20:35:07.702248 | orchestrator |
2025-05-13 20:35:07.704131 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-13 20:35:07.708115 | orchestrator | skipping: no hosts matched
2025-05-13 20:35:07.708831 | orchestrator |
2025-05-13 20:35:07.711662 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-13 20:35:07.712647 | orchestrator | skipping: no hosts matched
2025-05-13 20:35:07.714722 | orchestrator |
2025-05-13 20:35:07.714854 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-13 20:35:07.716204 | orchestrator | skipping: no hosts matched
2025-05-13 20:35:07.717418 | orchestrator |
2025-05-13 20:35:07.719330 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-13 20:35:07.719372 | orchestrator |
2025-05-13 20:35:07.723393 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-13 20:35:07.724865 | orchestrator | Tuesday 13 May 2025 20:35:07 +0000 (0:00:04.814) 0:00:10.200 ***********
2025-05-13 20:35:07.921039 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:35:07.926544 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:35:07.927118 | orchestrator |
2025-05-13 20:35:07.929203 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-13 20:35:07.930149 | orchestrator | Tuesday 13 May 2025 20:35:07 +0000 (0:00:00.219) 0:00:10.420 ***********
2025-05-13 20:35:08.052169 | orchestrator | skipping: [testbed-node-1]
2025-05-13 20:35:08.052700 | orchestrator | skipping: [testbed-node-2]
2025-05-13 20:35:08.053172 | orchestrator |
2025-05-13 20:35:08.056044 | orchestrator | PLAY RECAP *********************************************************************
2025-05-13 20:35:08.056114 | orchestrator | 2025-05-13 20:35:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-13 20:35:08.056127 | orchestrator | 2025-05-13 20:35:08 | INFO  | Please wait and do not abort execution.
2025-05-13 20:35:08.056606 | orchestrator | testbed-node-0 : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-05-13 20:35:08.057256 | orchestrator | testbed-node-1 : ok=4 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-05-13 20:35:08.057795 | orchestrator | testbed-node-2 : ok=4 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-05-13 20:35:08.058537 | orchestrator |
2025-05-13 20:35:08.059008 | orchestrator |
2025-05-13 20:35:08.060095 | orchestrator | TASKS RECAP ********************************************************************
2025-05-13 20:35:08.060300 | orchestrator | Tuesday 13 May 2025 20:35:08 +0000 (0:00:00.132) 0:00:10.553 ***********
2025-05-13 20:35:08.061209 | orchestrator | ===============================================================================
2025-05-13 20:35:08.061529 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ------------ 4.81s
2025-05-13 20:35:08.061844 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.24s
2025-05-13 20:35:08.062349 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-05-13 20:35:08.062847 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s
2025-05-13 20:35:08.063223 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s
2025-05-13 20:35:08.063540 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-05-13 20:35:08.064204 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.22s
2025-05-13 20:35:08.064514 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.13s
2025-05-13 20:35:08.800613 | orchestrator | ERROR
2025-05-13 20:35:08.800960 | orchestrator | {
2025-05-13 20:35:08.801029 | orchestrator |   "delta": "0:05:43.902727",
2025-05-13 20:35:08.801070 | orchestrator |   "end": "2025-05-13 20:35:08.708998",
2025-05-13 20:35:08.801104 | orchestrator |   "msg": "non-zero return code",
2025-05-13 20:35:08.801135 | orchestrator |   "rc": 2,
2025-05-13 20:35:08.801166 | orchestrator |   "start": "2025-05-13 20:29:24.806271"
2025-05-13 20:35:08.801196 | orchestrator | } failure
2025-05-13 20:35:08.834311 |
2025-05-13 20:35:08.834414 | PLAY RECAP
2025-05-13 20:35:08.834473 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-05-13 20:35:08.834499 |
2025-05-13 20:35:09.047195 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
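The deploy run exits with rc 2 solely because of the backup task; the only failure in either recap is on testbed-node-0. After the node-side io_uring policy is addressed, only the backup step needs repeating, not the whole deployment. A hypothetical sketch using kolla-ansible's documented backup entry point (OSISM drives kolla-ansible through its own tooling here, so the actual wrapper command in this testbed may differ, and <inventory> is a placeholder; adjust the subcommand spelling to the installed kolla-ansible version):

    # Repeat only the MariaDB backup instead of the whole deploy
    # (hypothetical direct invocation; adjust for the OSISM wrapper).
    kolla-ansible mariadb_backup -i <inventory> --incremental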
2025-05-13 20:35:09.050007 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-13 20:35:09.790950 |
2025-05-13 20:35:09.791123 | PLAY [Post output play]
2025-05-13 20:35:09.807259 |
2025-05-13 20:35:09.807401 | LOOP [stage-output : Register sources]
2025-05-13 20:35:09.884504 |
2025-05-13 20:35:09.884936 | TASK [stage-output : Check sudo]
2025-05-13 20:35:10.772867 | orchestrator | sudo: a password is required
2025-05-13 20:35:10.932296 | orchestrator | ok: Runtime: 0:00:00.018865
2025-05-13 20:35:10.946756 |
2025-05-13 20:35:10.946965 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-13 20:35:10.979834 |
2025-05-13 20:35:10.980034 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-13 20:35:11.067521 | orchestrator | ok
2025-05-13 20:35:11.076556 |
2025-05-13 20:35:11.076763 | LOOP [stage-output : Ensure target folders exist]
2025-05-13 20:35:11.591337 | orchestrator | ok: "docs"
2025-05-13 20:35:11.591585 |
2025-05-13 20:35:11.860442 | orchestrator | ok: "artifacts"
2025-05-13 20:35:12.116477 | orchestrator | ok: "logs"
2025-05-13 20:35:12.127088 |
2025-05-13 20:35:12.127233 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-13 20:35:12.158539 |
2025-05-13 20:35:12.158775 | TASK [stage-output : Make all log files readable]
2025-05-13 20:35:12.472521 | orchestrator | ok
2025-05-13 20:35:12.482677 |
2025-05-13 20:35:12.482851 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-13 20:35:12.518433 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:12.535852 |
2025-05-13 20:35:12.536019 | TASK [stage-output : Discover log files for compression]
2025-05-13 20:35:12.560655 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:12.574792 |
2025-05-13 20:35:12.574982 | LOOP [stage-output : Archive everything from logs]
2025-05-13 20:35:12.619331 |
2025-05-13 20:35:12.619496 | PLAY [Post cleanup play]
2025-05-13 20:35:12.627330 |
2025-05-13 20:35:12.627433 | TASK [Set cloud fact (Zuul deployment)]
2025-05-13 20:35:12.685967 | orchestrator | ok
2025-05-13 20:35:12.699557 |
2025-05-13 20:35:12.699798 | TASK [Set cloud fact (local deployment)]
2025-05-13 20:35:12.734645 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:12.750168 |
2025-05-13 20:35:12.750324 | TASK [Clean the cloud environment]
2025-05-13 20:35:13.364223 | orchestrator | 2025-05-13 20:35:13 - clean up servers
2025-05-13 20:35:14.255513 | orchestrator | 2025-05-13 20:35:14 - testbed-manager
2025-05-13 20:35:15.414943 | orchestrator | 2025-05-13 20:35:15 - testbed-node-0
2025-05-13 20:35:15.504297 | orchestrator | 2025-05-13 20:35:15 - testbed-node-3
2025-05-13 20:35:15.596981 | orchestrator | 2025-05-13 20:35:15 - testbed-node-5
2025-05-13 20:35:15.706110 | orchestrator | 2025-05-13 20:35:15 - testbed-node-4
2025-05-13 20:35:15.808187 | orchestrator | 2025-05-13 20:35:15 - testbed-node-2
2025-05-13 20:35:15.904194 | orchestrator | 2025-05-13 20:35:15 - testbed-node-1
2025-05-13 20:35:16.008886 | orchestrator | 2025-05-13 20:35:16 - clean up keypairs
2025-05-13 20:35:16.027996 | orchestrator | 2025-05-13 20:35:16 - testbed
2025-05-13 20:35:16.055680 | orchestrator | 2025-05-13 20:35:16 - wait for servers to be gone
2025-05-13 20:35:23.164738 | orchestrator | 2025-05-13 20:35:23 - clean up ports
2025-05-13 20:35:23.391628 | orchestrator | 2025-05-13 20:35:23 - 2533edc6-ae90-48f5-bce3-d234f05de5b8
2025-05-13 20:35:23.813295 | orchestrator | 2025-05-13 20:35:23 - 363d5a9d-48f8-4ee9-a4ff-022dd31dba0b
2025-05-13 20:35:24.031120 | orchestrator | 2025-05-13 20:35:24 - 5319c854-daf7-41b1-8578-6a9aa5e06bf0
2025-05-13 20:35:24.343655 | orchestrator | 2025-05-13 20:35:24 - 841e22fb-5248-4ad5-bc51-74f7a44481ee
2025-05-13 20:35:24.540145 | orchestrator | 2025-05-13 20:35:24 - 8c7357bb-6500-4ea2-92c4-d3e21060221f
2025-05-13 20:35:24.786480 | orchestrator | 2025-05-13 20:35:24 - 9f6ad65d-bc63-4a19-8402-f2972f2671f4
2025-05-13 20:35:24.975872 | orchestrator | 2025-05-13 20:35:24 - a1e09806-0bbd-4a75-915e-9fc887682f6e
2025-05-13 20:35:25.166135 | orchestrator | 2025-05-13 20:35:25 - clean up volumes
2025-05-13 20:35:25.311641 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-manager-base
2025-05-13 20:35:25.352475 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-2-node-base
2025-05-13 20:35:25.393245 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-3-node-base
2025-05-13 20:35:25.431558 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-4-node-base
2025-05-13 20:35:25.471230 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-5-node-base
2025-05-13 20:35:25.513870 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-1-node-base
2025-05-13 20:35:25.557034 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-0-node-base
2025-05-13 20:35:25.598255 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-0-node-3
2025-05-13 20:35:25.639034 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-4-node-4
2025-05-13 20:35:25.681438 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-1-node-4
2025-05-13 20:35:25.725251 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-7-node-4
2025-05-13 20:35:25.767409 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-8-node-5
2025-05-13 20:35:25.809220 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-3-node-3
2025-05-13 20:35:25.851880 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-2-node-5
2025-05-13 20:35:25.892528 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-5-node-5
2025-05-13 20:35:25.931650 | orchestrator | 2025-05-13 20:35:25 - testbed-volume-6-node-3
2025-05-13 20:35:25.974510 | orchestrator | 2025-05-13 20:35:25 - disconnect routers
2025-05-13 20:35:26.069156 | orchestrator | 2025-05-13 20:35:26 - testbed
2025-05-13 20:35:26.741975 | orchestrator | 2025-05-13 20:35:26 - clean up subnets
2025-05-13 20:35:26.780110 | orchestrator | 2025-05-13 20:35:26 - subnet-testbed-management
2025-05-13 20:35:26.928913 | orchestrator | 2025-05-13 20:35:26 - clean up networks
2025-05-13 20:35:27.123248 | orchestrator | 2025-05-13 20:35:27 - net-testbed-management
2025-05-13 20:35:27.408733 | orchestrator | 2025-05-13 20:35:27 - clean up security groups
2025-05-13 20:35:27.445638 | orchestrator | 2025-05-13 20:35:27 - testbed-management
2025-05-13 20:35:27.536401 | orchestrator | 2025-05-13 20:35:27 - testbed-node
2025-05-13 20:35:27.631537 | orchestrator | 2025-05-13 20:35:27 - clean up floating ips
2025-05-13 20:35:27.664785 | orchestrator | 2025-05-13 20:35:27 - 81.163.192.173
2025-05-13 20:35:28.040924 | orchestrator | 2025-05-13 20:35:28 - clean up routers
2025-05-13 20:35:28.091318 | orchestrator | 2025-05-13 20:35:28 - testbed
2025-05-13 20:35:28.892424 | orchestrator | ok: Runtime: 0:00:15.697655
2025-05-13 20:35:28.895189 |
2025-05-13 20:35:28.895310 | PLAY RECAP
2025-05-13 20:35:28.895391 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-13 20:35:28.895432 |
2025-05-13 20:35:29.025193 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
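The cleanup task above walks the resources in reverse dependency order: servers first (their ports and volumes detach with them), then keypairs, leftover ports and volumes, then the router is disconnected from its subnet before the subnet, network, security groups, floating IP, and finally the router itself are removed. A rough manual equivalent with the openstack CLI, assuming credentials for the same project (a sketch only; the job uses its own cleanup tooling, and the resource names are copied from the log above):

    # Delete in the same order the job uses; removing a router before
    # detaching its subnet would fail, hence the sequencing.
    openstack server delete --wait testbed-manager testbed-node-0 testbed-node-1 \
        testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5
    openstack keypair delete testbed
    for port in $(openstack port list -f value -c ID); do openstack port delete "$port"; done
    for vol in $(openstack volume list -f value -c ID); do openstack volume delete "$vol"; done
    openstack router remove subnet testbed subnet-testbed-management
    openstack subnet delete subnet-testbed-management
    openstack network delete net-testbed-management
    openstack security group delete testbed-management testbed-node
    openstack floating ip delete 81.163.192.173
    openstack router delete testbed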
2025-05-13 20:35:29.027552 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-13 20:35:29.777962 |
2025-05-13 20:35:29.778140 | PLAY [Cleanup play]
2025-05-13 20:35:29.794992 |
2025-05-13 20:35:29.795146 | TASK [Set cloud fact (Zuul deployment)]
2025-05-13 20:35:29.853521 | orchestrator | ok
2025-05-13 20:35:29.863831 |
2025-05-13 20:35:29.863998 | TASK [Set cloud fact (local deployment)]
2025-05-13 20:35:29.899075 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:29.917136 |
2025-05-13 20:35:29.917299 | TASK [Clean the cloud environment]
2025-05-13 20:35:31.108049 | orchestrator | 2025-05-13 20:35:31 - clean up servers
2025-05-13 20:35:31.693909 | orchestrator | 2025-05-13 20:35:31 - clean up keypairs
2025-05-13 20:35:31.714541 | orchestrator | 2025-05-13 20:35:31 - wait for servers to be gone
2025-05-13 20:35:31.803242 | orchestrator | 2025-05-13 20:35:31 - clean up ports
2025-05-13 20:35:31.871866 | orchestrator | 2025-05-13 20:35:31 - clean up volumes
2025-05-13 20:35:31.950533 | orchestrator | 2025-05-13 20:35:31 - disconnect routers
2025-05-13 20:35:31.973885 | orchestrator | 2025-05-13 20:35:31 - clean up subnets
2025-05-13 20:35:31.996927 | orchestrator | 2025-05-13 20:35:31 - clean up networks
2025-05-13 20:35:32.131581 | orchestrator | 2025-05-13 20:35:32 - clean up security groups
2025-05-13 20:35:32.153411 | orchestrator | 2025-05-13 20:35:32 - clean up floating ips
2025-05-13 20:35:32.175987 | orchestrator | 2025-05-13 20:35:32 - clean up routers
2025-05-13 20:35:32.456124 | orchestrator | ok: Runtime: 0:00:01.451646
2025-05-13 20:35:32.460540 |
2025-05-13 20:35:32.460789 | PLAY RECAP
2025-05-13 20:35:32.460943 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-13 20:35:32.461007 |
2025-05-13 20:35:32.599694 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-13 20:35:32.601621 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-13 20:35:33.372126 |
2025-05-13 20:35:33.372330 | PLAY [Base post-fetch]
2025-05-13 20:35:33.389596 |
2025-05-13 20:35:33.389784 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-13 20:35:33.456019 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:33.466952 |
2025-05-13 20:35:33.467165 | TASK [fetch-output : Set log path for single node]
2025-05-13 20:35:33.526190 | orchestrator | ok
2025-05-13 20:35:33.535504 |
2025-05-13 20:35:33.535668 | LOOP [fetch-output : Ensure local output dirs]
2025-05-13 20:35:34.033969 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/logs"
2025-05-13 20:35:34.312148 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/artifacts"
2025-05-13 20:35:34.584586 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/221dbb57df2c4a04a6bf0721f15dc81e/work/docs"
2025-05-13 20:35:34.609133 |
2025-05-13 20:35:34.609316 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-13 20:35:35.646586 | orchestrator | changed: .d..t...... ./
2025-05-13 20:35:35.647163 | orchestrator | changed: All items complete
2025-05-13 20:35:35.647238 |
2025-05-13 20:35:36.341611 | orchestrator | changed: .d..t...... ./
2025-05-13 20:35:37.074519 | orchestrator | changed: .d..t...... ./
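The "changed: .d..t...... ./" lines are rsync itemized-change codes (the fetch-output role copies with Ansible's synchronize module, an rsync wrapper): the leading "." means nothing was transferred, "d" marks a directory, and "t" flags a modification-time difference, so only directory timestamps changed here. The format can be reproduced locally, for example (placeholder paths):

    # Illustrative only: print rsync's itemized change codes without copying.
    rsync -ai --dry-run some/src/ some/dest/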
2025-05-13 20:35:37.106085 |
2025-05-13 20:35:37.106235 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-13 20:35:37.148032 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:37.150892 | orchestrator | skipping: Conditional result was False
2025-05-13 20:35:37.159992 |
2025-05-13 20:35:37.160081 | PLAY RECAP
2025-05-13 20:35:37.160134 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-13 20:35:37.160161 |
2025-05-13 20:35:37.289058 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-13 20:35:37.291419 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-13 20:35:38.048412 |
2025-05-13 20:35:38.048568 | PLAY [Base post]
2025-05-13 20:35:38.063130 |
2025-05-13 20:35:38.063258 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-13 20:35:39.058656 | orchestrator | changed
2025-05-13 20:35:39.069119 |
2025-05-13 20:35:39.069244 | PLAY RECAP
2025-05-13 20:35:39.069318 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-13 20:35:39.069394 |
2025-05-13 20:35:39.194057 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-13 20:35:39.195084 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-13 20:35:40.083119 |
2025-05-13 20:35:40.083306 | PLAY [Base post-logs]
2025-05-13 20:35:40.094601 |
2025-05-13 20:35:40.094790 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-13 20:35:40.570593 | localhost | changed
2025-05-13 20:35:40.589056 |
2025-05-13 20:35:40.589306 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-13 20:35:40.628762 | localhost | ok
2025-05-13 20:35:40.634923 |
2025-05-13 20:35:40.635083 | TASK [Set zuul-log-path fact]
2025-05-13 20:35:40.651972 | localhost | ok
2025-05-13 20:35:40.662528 |
2025-05-13 20:35:40.662658 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-13 20:35:40.689016 | localhost | ok
2025-05-13 20:35:40.693982 |
2025-05-13 20:35:40.694125 | TASK [upload-logs : Create log directories]
2025-05-13 20:35:41.205935 | localhost | changed
2025-05-13 20:35:41.211856 |
2025-05-13 20:35:41.212023 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-13 20:35:41.748606 | localhost -> localhost | ok: Runtime: 0:00:00.007292
2025-05-13 20:35:41.758690 |
2025-05-13 20:35:41.759009 | TASK [upload-logs : Upload logs to log server]
2025-05-13 20:35:42.360078 | localhost | Output suppressed because no_log was given
2025-05-13 20:35:42.364558 |
2025-05-13 20:35:42.364852 | LOOP [upload-logs : Compress console log and json output]
2025-05-13 20:35:42.413847 | localhost | skipping: Conditional result was False
2025-05-13 20:35:42.418792 | localhost | skipping: Conditional result was False
2025-05-13 20:35:42.432152 |
2025-05-13 20:35:42.432407 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-13 20:35:42.483246 | localhost | skipping: Conditional result was False
2025-05-13 20:35:42.484051 |
2025-05-13 20:35:42.487130 | localhost | skipping: Conditional result was False
2025-05-13 20:35:42.501347 |
2025-05-13 20:35:42.501588 | LOOP [upload-logs : Upload console log and json output]